Current research at Data Science Lab includes Generative AI based on Large Language Models (LLMs) in investment, healthcare, and law applications.
LLM-based Generative AI in investment applications
LLMs can extract patterns and trends from news, market reports, financial statements, and even social media sentiment, helping investors with tasks such as market summarization and question answering about market conditions. Retrieval-Augmented Generation (RAG) is a technique that enhances LLMs by integrating them with an external information retrieval system. This allows LLMs to access and incorporate relevant information from authoritative sources outside their training data, improving the accuracy and relevance of their responses. By combining the generative power of LLMs with data retrieval, RAG can provide more nuanced and contextually appropriate answers, making it particularly useful for applications that require up-to-date, domain-specific information. We explore a RAG-based approach for analyzing and summarizing company health for investment.
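The retrieve-then-generate flow described above can be sketched in a few lines. The document store, the keyword-overlap scoring, and the prompt wording below are all illustrative stand-ins; a real system would use a vector index and an LLM API for the final generation step.

```python
# Minimal sketch of RAG prompt assembly (illustrative; a production system
# would use embedding-based retrieval and pass the prompt to an LLM).

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Prepend retrieved context so the model answers from external sources."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "ACME Corp reported Q2 revenue growth of 12% year over year.",
    "The central bank held interest rates steady in June.",
    "ACME Corp reduced long-term debt by 8% in Q2.",
]
prompt = build_rag_prompt("How healthy is ACME Corp this quarter?", docs)
```

The key point is that the query is answered against retrieved context rather than the model's parametric memory alone, which is what makes responses traceable to sources.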
LLM-based Generative AI in healthcare applications
Large Language Models (LLMs) have shown remarkable capabilities in natural language processing, greatly enhancing human-computer interactions. We explore the fine-tuning of LLMs for medical applications, training them on specialized medical texts such as patient records, medical literature, clinical notes, and online doctor-patient conversations. This fine-tuning enables LLMs to provide clinical decision support, answer medical queries, and assist in diagnostic tasks, effectively functioning as medical chatbots that enhance healthcare delivery. Our approach focuses on resource-efficient fine-tuning to develop effective models for modern medical practice under limited compute budgets.
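One widely used family of resource-efficient fine-tuning methods is parameter-efficient adaptation such as LoRA (Low-Rank Adaptation), where the pretrained weights stay frozen and only small low-rank factors are trained. The NumPy sketch below illustrates the core idea on a single linear layer; the dimensions and scaling are toy values, not the lab's actual training setup.

```python
import numpy as np

# Illustrative LoRA sketch: the pretrained weight W is frozen; only the
# small factors A (r x d) and B (d x r) would be trained, cutting trainable
# parameters from d*d down to 2*d*r.

rng = np.random.default_rng(0)
d, r = 8, 2                               # hidden size, adapter rank (r << d)
W = rng.normal(size=(d, d))               # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d))   # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero-init
alpha = 4.0                               # adapter scaling factor

def adapted_forward(x):
    """y = x W^T + (alpha/r) * x (B A)^T; gradients flow only to A and B."""
    return x @ W.T + (alpha / r) * (x @ (B @ A).T)

x = rng.normal(size=(1, d))
# Because B is zero-initialized, the adapter starts as a no-op:
# adapted_forward(x) equals the frozen base model output x @ W.T.
```

In practice this is done with a library (e.g. Hugging Face PEFT) across all attention projections of an LLM; the sketch only shows why the memory footprint of fine-tuning shrinks so dramatically.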
Multi-agent collaboration involves breaking down complex tasks into smaller, manageable subtasks, each handled by a different AI agent with a specific role, much like a human team. This approach uses multiple LLMs to perform distinct tasks, each with its own workflow and memory, providing a structured and efficient method for task management. By distributing responsibilities among specialized agents, this method enhances overall efficiency and accuracy on complex tasks. We are exploring agentic versus non-agentic approaches in distributed settings and measuring their performance.
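The role-plus-memory structure described above can be sketched as plain Python objects. The roles, the lambda "workflows", and the task strings are hypothetical placeholders; in a real system each `step_fn` would wrap an LLM call.

```python
# Toy sketch of multi-agent collaboration: each agent has a role, its own
# memory of past work, and a step function standing in for an LLM workflow.

class Agent:
    def __init__(self, role, step_fn):
        self.role = role
        self.memory = []          # per-agent record of (task, result) pairs
        self.step_fn = step_fn    # placeholder for an LLM-backed workflow

    def run(self, task):
        result = self.step_fn(task)
        self.memory.append((task, result))
        return result

# Decompose one complex task into subtasks handled by specialists,
# chaining one agent's output into the next agent's input.
summarizer = Agent("summarizer", lambda t: f"summary of {t}")
analyst = Agent("analyst", lambda s: f"risk analysis of {s}")

summary = summarizer.run("Q2 market report")
analysis = analyst.run(summary)
```

A non-agentic baseline would instead send the whole task to a single model in one prompt; comparing the two pipelines on the same tasks is the kind of measurement described above.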
LLM-based Generative AI in law applications
A multi-agent system architecture allows agents to collaborate in resolving complex legal issues, offering superior performance compared to single-agent systems. A significant challenge for large language models (LLMs) in the legal domain, which demands absolute precision, is the "hallucination" problem. To address this issue, one of the key techniques we use is Retrieval-Augmented Generation (RAG), which integrates directly relevant legal documents into the context of the query to produce accurate answers with references to the original sources. This ensures authenticity and allows users to trace the origins of the information, enhancing the system's reliability. User questions, answers, and feedback are also stored for refining the model over time.
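Two of the ideas above, answers that carry citations back to source documents and logging of interactions for later refinement, can be sketched as follows. The statute texts, article identifiers, and keyword matching are invented for illustration only.

```python
# Sketch of source traceability and feedback logging in a legal RAG system.
# The corpus and matching rule are illustrative stand-ins.

LEGAL_CORPUS = {
    "Statute Art. 117": "A civil transaction is valid when its conditions are met.",
    "Statute Art. 122": "A civil transaction lacking a validity condition is void.",
}

feedback_log = []  # stored questions, answers, and user feedback

def answer_with_citations(question):
    """Retrieve matching articles and return them with citable identifiers."""
    q_terms = set(question.lower().split())
    hits = {ref: text for ref, text in LEGAL_CORPUS.items()
            if q_terms & set(text.lower().split())}
    answer = {"context": hits, "citations": sorted(hits)}
    # Every interaction is stored so user feedback can refine the system.
    feedback_log.append({"question": question,
                         "answer": answer,
                         "feedback": None})
    return answer

ans = answer_with_citations("When is a civil transaction void?")
```

Because each retrieved passage keeps its identifier, the final answer can cite "Statute Art. 122" and the user can trace the claim back to the original text, which is the traceability property described above.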