Here's why a generative model alone is never good enough

Artificial intelligence has advanced rapidly in recent years, and one of the most important developments is generative AI. Generative AI offers immense potential for innovation and efficiency across diverse fields: it can make existing content easier to interpret and understand, and it can automatically create new content. Developers are exploring ways generative AI can improve existing workflows, and even redesign workflows entirely to take advantage of the technology.
Yet, its deployment mandates a nuanced approach, addressing ethical concerns and potential biases.
Generative AI models, including those trained on large internet datasets, can produce outputs that may be inaccurate, biased, or inappropriate.
Below are a few drawbacks of generative AI:
  • Ethical and Biased Outputs: Generative models can inadvertently generate biased or politically incorrect content, reflecting the biases present in their training data. Ensuring ethical use and minimizing bias is a significant challenge.
  • Computational Requirements: Advanced generative AI models like GPT-4 demand substantial computational power, leading to higher costs for enterprises needing robust cloud resources or dedicated hardware, which may not be feasible for all.
  • Security and Privacy Concerns: Generative AI can be used for malicious purposes, such as generating fake news or deepfakes, raising concerns about privacy and misinformation.
  • Model Explainability and Repeatability: It is difficult to understand how these models generate outputs or to maintain consistent performance, particularly in industries that value accountability and traceability.
  • Scalability and Integration Challenges: Incorporating generative AI into business processes and scaling it for enterprise-level needs can be complex, demanding technical proficiency and strategic alignment with business goals.
To address the above issues, fine-tuning pre-trained models is beneficial.

What is Fine-Tuning in Generative AI?

Fine-tuning customizes pre-trained models for specific tasks or behaviors by adapting a broad model to a narrower subject or goal. It lets us leverage the general knowledge and skills of a large, powerful model and apply them to a specific field or objective.

Difference between Pre-training & Fine-tuning Tasks in LLM

Aspect    | Pre-Training                           | Fine-Tuning
Data      | Large, general corpus                  | Smaller, domain-specific dataset
Objective | Understand general language patterns   | Specialize in a specific task/domain
Training  | From scratch or from an existing base  | Further training of a pre-trained model
Outcome   | General-purpose language understanding | Task/domain-specific performance
  • Pre-training establishes the model's core understanding of language, much like teaching a child basic language skills at the outset.
  • Fine-tuning specializes this knowledge for specific tasks or domains, similar to then teaching a subject like biology or law. Together, these two stages enable the creation of highly effective and adaptable language models that can be tailored for various applications.

Benefits of Fine-Tuning Pre-Trained Models

  • Fine-tuning boosts task-specific performance by aligning with domain-specific data, imparting specialized knowledge for generating accurate and contextually relevant outputs.
  • Moreover, fine-tuning reduces training time and computational resources, enabling developers to utilize existing knowledge and save time and costs compared to starting from scratch.
  • Finally, fine-tuning allows models to specialize in areas such as medical research, legal analysis, or customer support, unlocking valuable insights and delivering targeted solutions.

How does Fine-Tuning Work?

The process generally involves three key steps:
  • Dataset Preparation: Developers gather a dataset specifically curated for their desired task or domain. This dataset typically includes examples of inputs and corresponding desired outputs, which are used to train the model.
  • Training the Model: Using the curated dataset, the pre-trained model is further trained on the task-specific data. The model’s parameters are adjusted to adapt it to the new domain, enabling it to generate more accurate and contextually relevant responses.
  • Evaluation and Iteration: Once fine-tuning is complete, the model is evaluated on a validation set to ensure it meets the desired performance criteria. If necessary, the process can be repeated with adjusted parameters or additional data to improve performance further.
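The three steps above can be sketched with a toy model. This is a hypothetical illustration only: real fine-tuning would adapt a large pre-trained network with an ML framework, but the same prepare, train, evaluate loop applies even to this tiny linear model.

```python
import numpy as np

# Toy sketch of the fine-tuning loop; weights and data are invented
# stand-ins, not a real pre-trained model.
rng = np.random.default_rng(0)

# Stand-in for "pre-trained" weights learned on general data.
w = np.array([1.0, -0.5])

# Step 1 - Dataset preparation: curated input/output pairs for the
# target domain, split into training and validation sets.
true_w = np.array([2.0, 0.5])          # the domain-specific mapping
X = rng.normal(size=(64, 2))
y = X @ true_w
X_train, y_train = X[:48], y[:48]
X_val, y_val = X[48:], y[48:]

def mse(w, X, y):
    """Mean squared error of predictions X @ w against targets y."""
    return float(np.mean((X @ w - y) ** 2))

loss_before = mse(w, X_val, y_val)

# Step 2 - Training: further adjust the pre-trained parameters on the
# task-specific data via gradient descent.
lr = 0.05
for _ in range(200):
    grad = 2 * X_train.T @ (X_train @ w - y_train) / len(X_train)
    w = w - lr * grad

# Step 3 - Evaluation: measure performance on the held-out validation
# set; iterate with adjusted settings if the criteria are not met.
loss_after = mse(w, X_val, y_val)
print(f"validation loss before: {loss_before:.4f}, after: {loss_after:.6f}")
```

The validation loss drops sharply because the model starts from useful weights rather than from scratch, which is exactly the cost advantage fine-tuning offers.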

The future of Business with Generative AI

With fine-tuning, generative AI presents a wide array of opportunities and benefits for businesses seeking to transform their operations and drive innovation.
Let's look at real-world instances where AI systems have faced challenges:
  • Amazon's Recruitment AI Was Trained to Be Misogynistic: Amazon engineers realized that they had taught their own AI that male candidates were automatically better. From its training data, Amazon's recruitment AI "learned" that candidates who appeared whiter and more male were more likely to be good fits for engineering jobs.
  • Racial Bias in Amazon's Face Recognition: In a test, Amazon's AI-based Rekognition facial recognition system falsely matched 28 members of the U.S. Congress with criminal mugshots. Nearly 40 percent of Rekognition's false matches were of people of color, even though they make up only about 20 percent of Congress.
  • Google's AI Chatbot Bard Made a Factual Error in Its First Demo: Google's parent company, Alphabet, lost about $100 billion in market value after its new chatbot Bard, in its launch demo, answered questions including explaining the James Webb Space Telescope's findings to a nine-year-old. Although NASA confirms that the Very Large Telescope took the first pictures of an exoplanet in 2004, Bard incorrectly claimed that the James Webb Space Telescope took the "very first pictures of a planet outside our solar system".

Why do customers opt out of a call or a chat?

Bad customer service is when a customer feels their expectations were not met. The Zendesk Customer Experience Trends Report 2021 found that 75% of people are willing to spend more money on a brand that provides a stellar experience. The top indicators of poor customer service include:
  1. Lack of empathy
  2. Customers can't reach you
  3. Poor automated phone prompts
  4. Long wait times
  5. Being transferred multiple times
Here is where generative AI comes into the picture. Generative AI enables organizations to offer 24/7 real-time support, ensuring that customers can access assistance whenever they need it. By automating customer support processes like ticket routing and issue resolution, generative AI can help reduce resolution times and prevent delays in providing support.
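As a simplified sketch of what automated ticket routing looks like: a production system would classify tickets with a fine-tuned generative model, but the keyword rules and queue names below are hypothetical placeholders that show the flow only.

```python
# Hypothetical keyword-based stand-in for model-driven ticket routing;
# in practice a fine-tuned classifier would replace the ROUTES table.
ROUTES = {
    "refund": "billing",
    "password": "account-security",
    "delivery": "logistics",
}

def route_ticket(text: str) -> str:
    """Return the support queue a ticket should be sent to."""
    lowered = text.lower()
    for keyword, queue in ROUTES.items():
        if keyword in lowered:
            return queue
    return "general-support"   # fallback: no rule matched

print(route_ticket("I never received my delivery"))  # logistics
print(route_ticket("How do I reset my password?"))   # account-security
```

Swapping the rule table for a model call is what lets generative AI handle phrasing the rules never anticipated, which is where the resolution-time savings come from.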

Consumers’ POV: Consumers’ Distrust

As the generative AI hype grows, a Salesforce survey shows that only 13% of consumers completely trust companies to use AI ethically, while 10% of consumers completely distrust companies' use of generative AI.
Consumers are concerned about data security risks, unethical use of AI, and biases. Over 89% of consumers believe it is important to know whether they are communicating with an AI or a human.
And 80% of consumers have highlighted that it is important for a human to stay in the loop to validate the output generated by an AI tool.
Source: Salesforce
It’s always been important to collect quality data and ensure transparency and consent in the collection process. But it’s not just about taking data in. It’s also about what happens to that data once we have it.
Companies may need data as much as ever, but the best thing they can do to protect customers is to build methodologies that prioritize keeping that data — and their customers’ trust — safe.
This is why AI governance plays a major role in building customers’ trust.

Why is AI governance needed?

To navigate the current absence of regulations, organizations utilizing Generative AI must take on the responsibility of self-regulation. Governance is an indispensable part of using AI to benefit the organization and society.
Here are four reasons why AI governance should be a priority.
  1. Respecting ethical and moral considerations: AI systems can have societal consequences and introduce biases into decisions. AI governance mandates accountability, requiring organizations to consider societal impacts and implement systems fairly, transparently, and in alignment with human values and individual rights.
  2. Complying with legal and regulatory requirements: AI regulations are gaining attention globally. In this evolving landscape, AI governance best practices ensure alignment with existing laws; essential for data security and privacy, AI governance standards uphold compliance with relevant legislation.
  3. Managing risk: AI use entails risks such as loss of trust, skill erosion, and bias. AI governance offers a framework to identify and manage these risks effectively.
  4. Maintaining trust: AI algorithms are often opaque, which poses challenges for stakeholders. AI governance promotes transparency by requiring detailed information about data sources and algorithms, building trust with employees, customers, and community stakeholders.
At SBA, we've strategically partnered with global trailblazers like IBM, who have introduced the IBM® watsonx.governance™ toolkit for AI governance, which allows you to direct, manage and monitor your organization's AI activities.
Some of the key features of watsonx.governance:
  • Accelerate responsible, transparent and explainable AI workflows across the entire lifecycle
  • Automate and consolidate tools, applications, and platforms
  • Govern ML models, including those from 3rd parties and generative models (now in tech preview, GA in December 2023)
  • Manage risk and protect reputation by setting tolerances to proactively detect bias and drift
  • Capture metadata and document lineage throughout the model lifecycle
  • Improve adherence to AI regulations such as the proposed EU AI Act, internal policies and industry standards
  • Improve collaboration and communication with customizable dashboards and reports
The toolkit uses software automation to enhance risk mitigation, regulatory compliance, and ethical considerations for both generative AI and machine learning (ML) models.
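To make the bias-and-drift idea concrete without claiming anything about watsonx.governance's internals, here is a conceptual sketch: drift can be flagged when live data's mean shifts beyond a tolerance measured in standard deviations of the training data. The function name and thresholds are invented for illustration.

```python
import statistics

# Conceptual drift check - not the watsonx.governance API. Flags drift
# when the live scores' mean moves more than `tolerance` training-data
# standard deviations away from the training mean.
def detect_drift(train_scores, live_scores, tolerance=2.0):
    mu = statistics.mean(train_scores)
    sigma = statistics.stdev(train_scores)
    shift = abs(statistics.mean(live_scores) - mu)
    return shift > tolerance * sigma

train = [0.48, 0.50, 0.52, 0.49, 0.51, 0.50]
print(detect_drift(train, [0.50, 0.49, 0.51]))  # False: stable
print(detect_drift(train, [0.80, 0.82, 0.79]))  # True: mean shifted
```

Production governance tooling layers alerting, lineage capture and reporting on top of checks like this, which is what turns a statistic into an auditable control.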

The Way Forward

In summary, Generative AI has the potential to revolutionize various industries and make our lives easier. However, like any technology, there are risks that need to be addressed. Responsible governance and regulation can maximize the potential of generative AI while minimizing its negative impacts.
Continuous improvement and adaptation to evolving circumstances are essential, ensuring that AI systems remain effective over time. By addressing these considerations comprehensively, organizations can harness the full potential of AI, fostering innovation, ethical practices, and positive impacts on user experiences.
Written by
Venkatesh A
Venkatesh works with global change makers like IBM, specializing in implementing generative AI, LLMs, and cutting-edge data technologies to address complex business problems. A certified expert on watsonx, he's passionate about exploring uncharted territories to find innovative solutions. By leveraging the technical intricacies of AI, he drives data-driven strategies and creates tangible value for India's CXOs and IT teams.