Scientist.com AI Innovation Series Recap

Explore the “hows” and “whys” of language models as well as their critical role in advancing research and innovation. In our recent AI Innovation Series, our Chief Technology Officer and Co-founder, Chris Petersen, provides an in-depth, educational overview of AI language models, covering the foundational principles of neural networks and how language models use numerical representations of language to interpret and generate human-like text. These sessions are designed not only for experienced professionals but also for beginners in the field who are eager to gain hands-on knowledge of how AI language models work and how they are impacting the biopharma industry.

Language Learning Models: Basics of Neural Networks

In this first episode, you’ll gain a thorough understanding of how neural networks learn, from the ground up. We begin by demystifying what neural networks really are, setting a solid foundation for beginners and enthusiasts alike. We then dissect the architecture of these brain-inspired data structures, revealing how layers of mathematics and data processing simulate the learning process. Witness feedforward computation in action, where an input is transformed step by step into an output; with enough neurons, such networks can approximate virtually any function. Lastly, we unravel backpropagation, the method by which neural networks learn from their mistakes. Discover how subtle adjustments to their digital synapses enable them to improve over time, much like the human brain.
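
To make these ideas concrete, here is a minimal sketch in Python with NumPy (illustrative, not code from the webinar) of a tiny two-layer network learning the XOR function: feedforward computation produces a prediction, and backpropagation nudges the weights to reduce the error. The hidden-layer size, learning rate and iteration count are arbitrary choices for the demo.

```python
# A minimal two-layer network learning XOR with plain NumPy.
# Hidden size, learning rate and iteration count are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic function a single-layer network cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Feedforward: transform the input layer by layer into a prediction.
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # network output

    # Backpropagation: the chain rule pushes the error back through each layer.
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient descent: small adjustments to the "digital synapses".
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [0, 1, 1, 0]
```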

Language Learning Models: Understanding Embeddings

Building on the previous session, this webinar focuses on the transformative role of embeddings, also known as word vectors: a pivotal component in the architecture of language models.

Delve deep into the fascinating process of semantic mapping as we reveal how embeddings transform words into numerical entities, enabling machines to grasp the subtle nuances of human language. Discover how these vectors capture the essence of word meanings and their complex relationships, forming the bedrock for models that generate and understand language with remarkable accuracy.
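
As a toy illustration of the idea (again, not code from the session), the sketch below maps words to vectors and uses cosine similarity as a stand-in for semantic closeness. The 4-dimensional vectors are hand-picked for the demo; real models learn hundreds of dimensions from data.

```python
# Word embeddings in miniature: geometric closeness stands in for
# semantic closeness. The vectors below are hand-picked toy values.
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.8, 0.0, 0.1]),
    "apple": np.array([0.0, 0.1, 0.9, 0.7]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```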

Language Learning Models: Overview of the Attention Mechanism

As an essential continuation of our Language Model series, this session promises a detailed overview of the Attention Mechanism, a transformative element that has revolutionized the way machines understand and generate human language.

We delve into the world of Transformers, groundbreaking models that operate on matrices of word vectors to produce contextually relevant representations for the words in a sequence. These models have paved the way for tackling the context problem in language understanding, allowing for unprecedented accuracy and fluency in AI-generated text.
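
For the curious, here is a minimal NumPy sketch of scaled dot-product attention, the core computation inside a Transformer; the number of tokens and the vector size below are illustrative assumptions.

```python
# Scaled dot-product attention in miniature: 3 tokens, 4-dimensional
# query/key/value vectors. Shapes and random values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # queries: what each token is looking for
K = rng.normal(size=(3, 4))  # keys: what each token offers to match on
V = rng.normal(size=(3, 4))  # values: the content each token contributes

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how well each query matches each key
    # Softmax turns scores into attention weights that sum to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                # weighted mix of values = context

print(attention(Q, K, V).shape)  # (3, 4): one context vector per token
```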

Language Learning Models: The Transformer

In this webinar, we take a deep dive into the Transformer architecture, the powerhouse behind today’s most advanced language models. In this concluding session, we unravel the intricacies of Transformers as they navigate high-dimensional meaning spaces to redefine the capabilities of AI in understanding and generating human language.

We embark on a comprehensive exploration of the Transformer architecture, examining its unique ability to refine its understanding of language through advanced training techniques.
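
As a rough sketch of how these pieces fit together (illustrative, not the webinar’s code), the example below assembles one Transformer encoder block in NumPy: self-attention followed by a position-wise feedforward layer, each with a residual connection and layer normalization. The dimensions and random weights stand in for parameters a real model would learn during training.

```python
# One Transformer encoder block in NumPy. All sizes and random weights
# are illustrative stand-ins for learned parameters.
import numpy as np

rng = np.random.default_rng(0)
d_model, seq_len = 8, 5
x = rng.normal(size=(seq_len, d_model))  # one embedded 5-token sequence

def layer_norm(z, eps=1e-5):
    return (z - z.mean(-1, keepdims=True)) / (z.std(-1, keepdims=True) + eps)

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

# Random projections stand in for the learned attention and MLP weights.
Wq, Wk, Wv, Wo = (rng.normal(scale=0.5, size=(d_model, d_model)) for _ in range(4))
W1 = rng.normal(scale=0.5, size=(d_model, 32))
W2 = rng.normal(scale=0.5, size=(32, d_model))

def transformer_block(x):
    # Self-attention sublayer: every token attends to every other token.
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_model)) @ V
    x = layer_norm(x + attn @ Wo)      # residual connection + normalize

    # Feedforward sublayer: the same 2-layer MLP applied at each position.
    ff = np.maximum(0, x @ W1) @ W2    # ReLU nonlinearity
    return layer_norm(x + ff)          # residual connection + normalize

print(transformer_block(x).shape)  # (5, 8): same shape, refined representation
```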


Thank you for joining us on this journey through the world of AI language models and their transformative impact on research and biopharma. We hope these insights have empowered you with the knowledge to embrace AI innovations in your field. Additionally, learn more about our AI-powered procurement orchestration platform on Scientist.com.