Artificial Intelligence (A.I.)

What is generative AI?

Generative AI can learn from existing artifacts to generate new, realistic artifacts (at scale) that reflect the characteristics of the training data but don’t repeat it. It can produce a variety of novel content, such as images, video, music, speech, text, software code and product designs.  

Generative AI uses a number of techniques that continue to evolve. Foremost are AI foundation models, which are trained on a broad set of unlabeled data that can be used for different tasks, with additional fine-tuning. Complex math and enormous computing power are required to create these trained models, but they are, in essence, prediction algorithms. 

Today, generative AI most commonly creates content in response to natural language requests — it doesn’t require knowledge of or entering code — but the enterprise use cases are numerous and include innovations in drug and chip design and material science development. (Also see “What are some practical uses of generative AI?”)
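The idea that these trained models are, in essence, prediction algorithms can be illustrated with a minimal sketch. The following is a hypothetical toy example (a bigram model, nothing like a production foundation model): it "trains" by counting which word follows each word in a tiny corpus, then generates new text by repeatedly predicting a likely next word.

```python
import random
from collections import defaultdict, Counter

# Toy corpus standing in for "a broad set of unlabeled data".
corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": count which word follows each word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=6, seed=0):
    """Generate new text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:
            break  # no observed continuation for this word
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The output reflects the characteristics of the training data without necessarily repeating it verbatim, which is the same basic principle, scaled up enormously, behind modern generative models.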

 

For more information from Gartner (an American technology research and consulting firm), please visit:

https://www.gartner.com/en/topics/generative-ai

AIPRM A.I. Glossary

AIPRM’s Ultimate Generative AI Glossary

From students to professionals, this resource is designed to empower every reader with a solid and clear understanding of the critical concepts that drive generative AI.

From exploring foundational terms such as neural networks and deep learning to understanding the nuances of GANs, this glossary has been curated to offer a structured and comprehensive pathway to becoming proficient in the language of generative AI.

Use this easy-to-understand glossary to confidently navigate the dynamic world of generative AI. Think of each term as a tool, helping you build a stronger understanding and fostering meaningful discussions in this fast-paced field. Discover and take command with the unmatched guidance provided by this resource, ready at your fingertips.

https://www.aiprm.com/resources/guides/generative-ai-glossary/

Thanks to the Fuller Library Girls' STEM Club for highlighting this resource!

ChatGPT

What is ChatGPT?

ChatGPT, created by the AI research company OpenAI, is a natural language processing tool that lets you hold human-like conversations with a chatbot. The language model can answer questions and assist with tasks such as composing emails, essays, and code, and creating images. OpenAI launched ChatGPT on November 30, 2022.

OpenAI is also responsible for creating Whisper, an automatic speech recognition system. 

 

Ortiz, S. (2023, August 14). What is ChatGPT and why does it matter? Here's what you need to know. ZDNET. https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-heres-everything-you-need-to-know/ 

O'Reilly Videos

Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs

The advancement of Large Language Models (LLMs) has revolutionized the field of Natural Language Processing in recent years. Models like BERT, T5, and ChatGPT have demonstrated unprecedented performance on a wide range of NLP tasks, from text classification to machine translation. Despite their impressive performance, the use of LLMs remains challenging for many practitioners. The sheer size of these models, combined with the lack of understanding of their inner workings, has made it difficult for practitioners to effectively use and optimize these models for their specific needs.

Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs is a practical guide to the use of LLMs in NLP. It provides an overview of the key concepts and techniques used in LLMs and explains how these models work and how they can be used for various NLP tasks. The book also covers advanced topics, such as fine-tuning, alignment, and information retrieval, while providing practical tips and tricks for training and optimizing LLMs for specific NLP tasks.

This work addresses a wide range of topics in the field of Large Language Models, including the basics of LLMs, launching an application with proprietary models, fine-tuning GPT3 with custom examples, prompt engineering, building a recommendation engine, combining Transformers, and deploying custom LLMs to the cloud. It offers an in-depth look at the various concepts, techniques, and tools used in the field of Large Language Models.

Topics covered:

  • Coding with Large Language Models (LLMs)

  • Overview of using proprietary models

  • OpenAI, Embeddings, GPT3, and ChatGPT

  • Vector databases and building a neural/semantic information retrieval system

  • Fine-tuning GPT3 with custom examples

  • Prompt engineering with GPT3 and ChatGPT

  • Advanced prompt engineering techniques

  • Building a recommendation engine

  • Combining Transformers

  • Deploying custom LLMs to the cloud
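One of the topics above, building a neural/semantic information retrieval system over a vector database, can be sketched in miniature. This is a hypothetical toy example: real systems use learned embeddings from a neural model, not the bag-of-words vectors used here, but the core mechanic of ranking indexed vectors by cosine similarity to a query vector is the same.

```python
import math
from collections import Counter

documents = [
    "fine-tuning GPT-3 with custom examples",
    "prompt engineering techniques for ChatGPT",
    "deploying large language models to the cloud",
]

def embed(text):
    """Toy embedding: a bag-of-words vector (real systems use a neural model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Index" each document as a vector, as a vector database would.
index = [(doc, embed(doc)) for doc in documents]

def search(query, k=1):
    """Return the k documents whose vectors are closest to the query's."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(search("prompt engineering with ChatGPT"))
```

Swapping the toy `embed` function for calls to an embedding model, and the in-memory list for a vector database, turns this sketch into the retrieval pipeline the book describes.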