PART 1: NVIDIA Omniverse and Digital Twins – Building a Smart Hospital
Speaker: Robert Rios, Developer, Mark III Systems; Principal, Mark III Innovation

In this workshop, Mark III and NVIDIA will walk through the detailed step-by-step process of building a digital twin of rooms and sections of a smart hospital using NVIDIA's Omniverse platform. Beyond 3D modeling with Omniverse and Omniverse-compatible apps such as Blender, Maya, and USD Composer, this session will also touch on how to build connectors in Omniverse to pipe in data and telemetry, and how to think about constructing teams that enable your institution to build digital twins, whether for smart hospitals or for research, clinical, or operational purposes.
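The connector pattern described above can be sketched in plain Python. This is a minimal illustration only, not the Omniverse Connect SDK: the class and attribute names (`TelemetryReading`, `RoomTwin`, `sync`) are hypothetical stand-ins for what a real connector would do, namely write live sensor values onto USD prim attributes in the twin's scene.

```python
from dataclasses import dataclass, field

# Hypothetical reading from a hospital room sensor. In a real Omniverse
# connector these values would be written to attributes on a USD prim.
@dataclass
class TelemetryReading:
    sensor_id: str
    attribute: str   # e.g. "temperature_c", "occupancy"
    value: float

@dataclass
class RoomTwin:
    """Stand-in for a USD prim representing one hospital room."""
    name: str
    attributes: dict = field(default_factory=dict)

    def apply(self, reading: TelemetryReading) -> None:
        # In Omniverse this would be prim.GetAttribute(...).Set(...)
        self.attributes[reading.attribute] = reading.value

def sync(twins: dict, readings: list) -> None:
    """Push a batch of telemetry readings onto their digital-twin rooms."""
    for r in readings:
        if r.sensor_id in twins:
            twins[r.sensor_id].apply(r)

# Usage: two rooms, one batch of readings from the building systems
twins = {"icu-101": RoomTwin("icu-101"), "or-2": RoomTwin("or-2")}
sync(twins, [
    TelemetryReading("icu-101", "temperature_c", 21.5),
    TelemetryReading("or-2", "occupancy", 4),
])
print(twins["icu-101"].attributes)  # {'temperature_c': 21.5}
```

The key design point a connector has to solve is exactly this mapping: which external sensor feeds which attribute on which prim, and how often the sync runs.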
PART 2: Intro to Large Language Models: LLM Tutorial and Disease Diagnosis LLM Lab
Speaker: Michaela Buchanan, Data Scientist, Mark III Systems

In this workshop we start by discussing what a large language model (LLM) is and the strengths and weaknesses of these models, surveying a handful of models and approaches. We cover the difference between pretraining and finetuning. Input processing is illustrated by walking through the steps of taking an input string and tokenizing it into input IDs. QLoRA is presented as a means of greatly reducing the computational requirements of LLM inference and finetuning. The concepts portion of the session concludes with a discussion of Hugging Face and its transformers library. The hands-on portion starts with performing inference using the Hugging Face transformers library and the Falcon-7B-Instruct model. We then move to finetuning Falcon-7B-Instruct on the MedText dataset, where the goal is to take a prompt that describes the symptoms of a medical issue and generate a diagnosis of the problem as well as steps to take to treat it.
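The string-to-input-IDs step can be sketched with a toy tokenizer. The vocabulary and IDs below are invented for illustration; real tokenizers (such as the subword tokenizer behind Falcon-7B-Instruct, loaded via `transformers.AutoTokenizer`) learn vocabularies of tens of thousands of subword pieces rather than splitting on whitespace.

```python
# Toy vocabulary mapping tokens to integer IDs; real vocabularies are
# learned from data and include subword pieces, not whole words.
VOCAB = {"<unk>": 0, "patient": 1, "reports": 2, "severe": 3, "headache": 4}

def tokenize(text: str) -> list[int]:
    """Lowercase, split on whitespace, and map each token to its ID.
    Out-of-vocabulary tokens fall back to the <unk> ID."""
    return [VOCAB.get(tok, VOCAB["<unk>"]) for tok in text.lower().split()]

input_ids = tokenize("Patient reports severe headache")
print(input_ids)  # [1, 2, 3, 4]
```

The model never sees the raw string; it consumes only these integer IDs, which index into its embedding table.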
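A QLoRA setup of the kind used in the lab can be sketched as a configuration fragment using the `transformers`, `peft`, and `bitsandbytes` libraries: the base model is loaded in 4-bit NF4 quantization, and small trainable LoRA adapters are attached to the attention projections. The specific hyperparameter values below (rank, alpha, dropout) are illustrative choices, not the workshop's exact settings.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# Small trainable low-rank adapters on Falcon's fused attention projection
lora_config = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,                        # scaling factor (illustrative)
    target_modules=["query_key_value"],   # Falcon's attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

Only the adapter weights are updated during finetuning, which is why QLoRA fits a 7B-parameter model's training pass on a single consumer-class GPU.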