
ChatGPT – the Newly Unleashed Monster

Introduction:

Today, many AI solutions are used across different areas to analyze data better and support decision-making, and 2022 was surely a banner year for these AI-based products and solutions. According to a poll by Toolbox, approximately 42% of tech experts named AI the largest and most widely used technology of 2022. (Athavale, 2022)

Undoubtedly, AI applications have a great impact on our lives: they shape what information we see online by predicting which type of content engages us, personalize adverts by gathering and analyzing facial data, and even assist in enforcing laws. For instance, the European Union AI Act is likely to become a worldwide standard for assessing whether AI affects your life positively or negatively wherever you may be, much as the EU’s General Data Protection Regulation (GDPR) did in 2018. (“The Artificial Intelligence Act,” n.d.)

OpenAI made quite the headlines last year with its use of artificial intelligence in the form of ChatGPT. OpenAI built the ChatGPT large language model for natural language processing tasks such as text generation and language translation. The GPT-3.5 (Generative Pre-trained Transformer 3.5) model serves as one of its foundations. (“ChatGPT: The Most Advanced AI Chatbot,” n.d.)
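
To make this concrete, here is a minimal sketch of generating text with a GPT-3.5-era model through OpenAI's API. It assumes the `openai` Python package (pre-1.0 interface) and a valid API key; the model name and prompt are illustrative choices, not details taken from the article's sources.

```python
# Hedged sketch: text generation/translation with a GPT-3.5-era completion
# model via the pre-1.0 `openai` package. The model name is an assumption.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder, not a real key

response = openai.Completion.create(
    model="text-davinci-003",  # illustrative GPT-3.5-family model
    prompt="Translate to French: 'The weather is nice today.'",
    max_tokens=60,
    temperature=0.2,  # low temperature keeps the translation literal
)
print(response.choices[0].text.strip())
```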

Framework / Foundational AI:

Today, AI systems are mostly built on models that can recognize what we speak or write, for instance the natural language processing behind programs we use almost every day (Murphy, 2022). These foundation models are trained on a wide range of data using self-supervision at scale, so that they can be adapted to perform a variety of downstream tasks.

Nowadays, models like GPT-3, DALL-E 2, BERT, and Stable Diffusion are becoming the foundation for AI systems. GPT-3 uses deep learning to produce content that reads like human-written text; it is most commonly used on websites to generate product descriptions and the like. DALL-E 2 uses a ‘diffusion’ method to produce realistic artwork from a prompt written in natural language. BERT helps many AI programs better understand the meaning of ambiguous words by reading content in the left-to-right and right-to-left directions concurrently (Rouse, 2022). Stable Diffusion is an image-generation model that produces precise images from text descriptions and is also used for in-painting and out-painting. (Marek, 2022)
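
As a small illustration of BERT's bidirectional reading, the sketch below uses the Hugging Face `transformers` fill-mask pipeline: the model looks at the words on both sides of the masked token to resolve what the ambiguous word should be. The checkpoint and example sentences are assumptions chosen for demonstration only.

```python
# Hedged demo: BERT resolves a masked word from context on BOTH sides.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "He deposited the money at the [MASK] before lunch.",
    "They rowed the boat to the other [MASK] of the river.",
]:
    top = unmasker(sentence)[0]  # highest-scoring completion
    print(f"{sentence} -> {top['token_str']} (score={top['score']:.2f})")
```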

The adoption of AI in businesses will be significantly accelerated by these foundation models in the near future. They can lower data-labeling requirements, which will make it much simpler for businesses to get started (Murphy, 2022).
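
One way to see the lower labeling burden is zero-shot classification, where a foundation model assigns labels it was never explicitly fine-tuned on. The sketch below is a hedged example using Hugging Face's `transformers` pipeline with an NLI-trained checkpoint; the model name, input text, and labels are all assumptions.

```python
# Hedged sketch: zero-shot classification, i.e., no task-specific labels needed.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The invoice total does not match the purchase order.",
    candidate_labels=["billing issue", "shipping delay", "product defect"],
)
# Labels come back sorted by score; the first is the model's best guess.
print(result["labels"][0], round(result["scores"][0], 2))
```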

Detractors of foundation models say that this kind of adaptable ‘large-scale-neural-network-in-a-can’ consumes so much input and has so many deep learning layers that it becomes impossible for a person to comprehend how a modified model calculates a certain output. Foundation models are also vulnerable to data poisoning attacks that disseminate false information or purposefully induce machine bias (Rouse, 2022). Besides, these models can imitate fanatic content and can potentially be used to radicalize people into extremist ideologies. (Haataja, 2022)
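
To give a feel for what data poisoning means in practice, here is a toy, self-contained demonstration of label-flipping on a synthetic dataset; it is a deliberately simplified stand-in, not an attack on an actual foundation model. As the poisoned fraction of the training labels grows, test accuracy degrades.

```python
# Toy label-flipping poisoning demo on synthetic data (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for poison_rate in [0.0, 0.2, 0.4]:
    y_poisoned = y_tr.copy()
    n_flip = int(poison_rate * len(y_poisoned))
    flip_idx = np.random.RandomState(0).choice(len(y_poisoned), n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]  # flip binary labels

    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"poisoned fraction = {poison_rate:.0%} -> test accuracy = {acc:.2f}")
```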

Accountability Matters in AI Governance:

One of the main factors weakening accountability in algorithmic societies is the “problem of many hands”: the difficulty of assigning moral blame for outcomes brought about by many agents. Artificial intelligence is fundamentally non-deterministic in nature, particularly with methods like machine learning that continuously evolve and update over their life cycle. While AI increases the need to build data protection by design and by default into a company’s culture and operations, the technical complexity of AI systems can make doing so more challenging. (“Accountability Implications of AI,” n.d.)

As enterprises increasingly adopt AI-based technology, often without putting the right controls in place, experts anticipate more instances of harmful AI. A governance structure and closer attention to AI ethics can help avoid unintentional bias and algorithmic drift in AI models. However, many experts are of the opinion that the development of ethical AI requires attention to three important areas:

  • Practical execution
  • The data it consumes
  • How the system is being utilized

According to Srivastava, the chief digital officer at Genpact, businesses that pay close attention to each of these three areas are more likely to develop AI-based systems that deliver accurate and equitable outcomes. Establishing ethical AI and AI accountability also entails taking precautions against improper applications: most AI systems are created for a certain use case, and using them for another would lead to inaccurate results. (Pratt, 2021)

According to DeepMind researchers, the problems caused by huge language models need to be addressed by working together with a variety of stakeholders and relying on a level of explainability high enough to enable quick detection and evaluation of problems. These models have become too potent to use without review by academics and impartial auditors.
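
DeepMind's proposal is about process rather than one specific tool, but a simple example of the kind of explainability technique involved is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below uses a generic dataset and model chosen as assumptions; it is not DeepMind's tooling.

```python
# Hedged sketch of one explainability technique: permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy most are the most "important".
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```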

Way Forward:

AI is visibly undergoing a change. Systems that carry out particular tasks in a single area are being replaced by AI that learns more broadly and solves problems across disciplines. And today, as ever, technology cannot advance and expand without confronting obstacles.

Overall, ChatGPT is a strong and adaptable tool for natural language processing tasks. As with any machine learning model, though, users should be aware of its limits. Large generative models are quite unpredictable, which makes it challenging to foresee the effects of their creation and application. Because of this unpredictable nature and the many-hands problem, it is crucial to have an additional level of transparency and feedback loops, so that both the sector and regulators can ensure better-aligned and safer solutions.

GPT-3 was effectively trained using alignment techniques, which use human-provided feedback to guide the AI toward producing less offensive language and less false information, and toward making fewer mistakes. Forty human trainers were employed to write and grade GPT-3’s responses, and the approach used reinforcement learning to educate the model. The results of the aligned model were well received, and this example emphasizes the significance of including humans in the evaluation of AI outputs.
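
The sketch below is a deliberately tiny illustration of that feedback loop, not OpenAI's actual pipeline: a softmax policy over three canned responses is updated with a REINFORCE-style rule, using hard-coded scores as a stand-in for human trainers' ratings, and drifts toward the well-rated response.

```python
# Toy RLHF-flavored sketch: simulated human ratings steer a policy.
import numpy as np

rng = np.random.default_rng(0)
responses = ["helpful answer", "evasive answer", "offensive answer"]
human_scores = np.array([1.0, 0.2, -1.0])  # stand-in for trainer ratings
logits = np.zeros(3)                        # policy parameters
lr = 0.5

for _ in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax policy
    a = rng.choice(3, p=probs)                     # sample a response
    reward = human_scores[a]                       # "human" feedback
    grad = -probs                                  # REINFORCE: d log pi / d logits
    grad[a] += 1.0
    logits += lr * reward * grad                   # reinforce well-rated outputs

probs = np.exp(logits) / np.exp(logits).sum()
print({r: round(float(p), 2) for r, p in zip(responses, probs)})
```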

References