Boost Insights with Logilica's AI Advisor

Unlocking Insights with AI: Automating Data-Driven Analysis

In today’s data-driven environment, contributors and users alike face an overwhelming amount of information. People often spend hours combing through logs, trying to piece raw data together into something usable. This turns what could be swift, data-driven decisions into time-consuming challenges that slow down innovation.

Artificial intelligence (AI) has become a necessity for this modern age of engineering. Integrating AI into your products addresses numerous user needs at once: it is always available, it saves enormous amounts of time, and it unlocks complex insights. Powered by large language models (LLMs), AI assistants can sift through vast amounts of data to find the relevant details you are looking for. They surface patterns, offer recommendations, and anticipate your needs, while ensuring valuable information isn’t missed or overlooked.

Inspired by these possibilities, the Logilica Labs team has developed an intuitive, user-friendly AI assistant designed specifically for Logilica’s data model of the Software Development Life Cycle (SDLC) pipeline. It answers users’ questions about the performance metrics of their projects. This article walks through how we developed our initial AI assistant and what we found along the way. While we have matured beyond that version since, we believe this article still offers some valuable insights.

Logilica’s AI Advisor (v1)

How it works

Our AI assistant responds to questions about the SDLC pipeline data available in Logilica’s data model. This involves extracting keywords from the prompt, generating a data query, retrieving the data, and summarising the relevant insights for the user’s needs. The following stages outline how our AI Advisor goes through this process.

  1. Processing User Queries
    The user’s queries are captured through the user interface (UI) of our AI Advisor, which is designed with a modern chatbot layout. When a user submits a query, our assistant interprets it using LangChain and OpenAI’s large language models (LLMs). Using natural language processing (NLP), it extracts the relevant metrics, filters, and the intent of the query. This ensures that even vague or loosely structured questions can be understood accurately. Here’s an example, followed by a sketch of how this extraction step could look in code.

    User query: What is the average cycle time for pull requests that were merged in the past month?
  • Metric: Pull request cycle time
  • Filters: Past month, merged
  • Intent: Average
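
    To make this concrete, here is a minimal sketch of how such an extraction step could look with LangChain and an OpenAI chat model. The prompt wording, model name, and JSON keys are illustrative assumptions rather than Logilica’s production implementation.

```python
# Illustrative sketch: extract metric, filters and intent from a user query
# with LangChain + OpenAI. The JSON keys and prompt wording are assumptions.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import JsonOutputParser

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "Extract the metric, filters and intent from the user's question about "
     "software delivery data. Respond with JSON using the keys "
     "'metric', 'filters' and 'intent'."),
    ("human", "{question}"),
])

# LCEL pipeline: prompt -> chat model -> JSON parser
extract_chain = prompt | llm | JsonOutputParser()

parsed = extract_chain.invoke({
    "question": "What is the average cycle time for pull requests "
                "that were merged in the past month?"
})
# Expected shape (illustrative):
# {"metric": "pull_request_cycle_time",
#  "filters": ["merged", "past_month"],
#  "intent": "average"}
print(parsed)
```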
  2. Embedding Generation & Vector Search
    For more flexible and context-aware results, our system employs vector search, which retrieves semantically similar information rather than relying solely on keyword matching. This enhances the assistant’s ability to find relevant insights even when user queries are imprecise or vague, and it is what makes our AI Advisor more flexible than a plain keyword search.

This is all possible because our initial database setup involved designing a dedicated schema for Logilica’s data, which stores the associated metadata together with its vector embeddings. Once the database has been populated through a data ingestion pipeline, our AI Advisor converts user queries into embeddings and runs similarity searches against the vector database of indexed queries and data-related concepts.

Vector stores and similarity search flow by LangChain (2024).
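
The snippet below sketches this step with LangChain’s OpenAI embeddings and an in-memory FAISS index. The indexed concept strings, the embedding model, and the store choice are illustrative; Logilica’s production setup uses its own schema and vector database.

```python
# Minimal sketch of embedding generation and similarity search, assuming an
# OpenAI embedding model and an in-memory FAISS index (illustrative only).
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# Example "indexed concepts" that a data ingestion pipeline might embed.
concepts = [
    "pull_request_cycle_time: time from first commit to merge",
    "review_time: time a pull request waits for review",
    "deployment_frequency: how often code is deployed to production",
]
vector_store = FAISS.from_texts(concepts, embeddings)

# At query time the user's question is embedded and compared against the index.
matches = vector_store.similarity_search_with_score(
    "How long do PRs take to get merged?", k=2
)
for doc, score in matches:
    print(score, doc.page_content)
```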
  3. Cube Query Generation
    Once relevant keywords and context are identified, our AI Advisor constructs a structured query for Logilica’s data layer. This layer provides powerful data modelling and aggregation, allowing our assistant to generate optimised, SQL-like queries that communicate with the database. These queries fetch structured data efficiently while applying the filters and calculations the user’s request calls for. This makes the system adaptable: users can express their data needs in conversational language while data integrity is maintained.
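
    As a rough illustration, the extracted metric, filters, and intent could be mapped onto a Cube-style JSON query like this. The cube and member names (such as PullRequests.avgCycleTime) are hypothetical placeholders, not Logilica’s actual data model.

```python
# Hypothetical mapping from the parsed query to a Cube-style JSON query.
# Member names like "PullRequests.avgCycleTime" are illustrative placeholders.
MEASURES = {
    ("pull_request_cycle_time", "average"): "PullRequests.avgCycleTime",
}

def build_cube_query(parsed: dict) -> dict:
    """Translate the extracted metric, filters and intent into a query."""
    query = {
        "measures": [MEASURES[(parsed["metric"], parsed["intent"])]],
        "timeDimensions": [],
        "filters": [],
    }
    if "past_month" in parsed["filters"]:
        query["timeDimensions"].append({
            "dimension": "PullRequests.mergedAt",
            "dateRange": "last month",
        })
    if "merged" in parsed["filters"]:
        query["filters"].append({
            "member": "PullRequests.status",
            "operator": "equals",
            "values": ["merged"],
        })
    return query

cube_query = build_cube_query({
    "metric": "pull_request_cycle_time",
    "filters": ["merged", "past_month"],
    "intent": "average",
})
```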

  4. Data Extraction & Processing
    The data layer executes the generated query, retrieving the necessary data from connected sources. This ensures that results are not only accurate but also optimised for performance when dealing with larger sets of information.
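
    For illustration, executing such a query against a Cube-style REST endpoint could look like the sketch below. The URL, authentication, and response shape are assumptions, since Logilica’s data layer is internal.

```python
# Sketch of running the generated query against a Cube-style /load endpoint.
# The base URL and token are placeholders; the response is assumed to contain
# a "data" array of result rows.
import requests

CUBE_API_URL = "https://example-data-layer/cubejs-api/v1/load"  # hypothetical

def run_query(cube_query: dict, token: str) -> list[dict]:
    response = requests.post(
        CUBE_API_URL,
        json={"query": cube_query},
        headers={"Authorization": token},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["data"]  # result rows as a list of dicts
```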

  5. Response Generation & Formatting
    After retrieving the data, LangChain templates are used to format the response into a summary, keeping the format convenient and flexible for the user’s needs. To summarise the results, structured prompts are constructed that incorporate the user’s question, the data query, and the result. This ensures that the LLM generates context-aware, accurate summaries aligned with the user’s intent and the retrieved data. Depending on the user’s needs, our assistant may provide a direct answer, a statistical overview, or a detailed explanation of trends and insights.
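
    A minimal sketch of this summarisation step, assuming a LangChain prompt template and an OpenAI chat model, is shown below; the prompt wording and the example result row are illustrative.

```python
# Illustrative summarisation step: combine the question, the generated query
# and the query result into a structured prompt and ask the model for a
# grounded summary.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

summary_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are an SDLC analytics assistant. Answer the user's question using "
     "only the query result provided. If the result does not answer the "
     "question, say so instead of guessing."),
    ("human",
     "Question: {question}\nQuery: {query}\nResult: {result}\n"
     "Summarise the answer for the user."),
])

summarise = summary_prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

answer = summarise.invoke({
    "question": "What is the average cycle time for pull requests merged "
                "in the past month?",
    "query": {"measures": ["PullRequests.avgCycleTime"]},    # illustrative
    "result": [{"PullRequests.avgCycleTime": 41.5}],         # illustrative
})
print(answer)
```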
Architecture and Process Flow for Logilica's AI Advisor (v1)

Handling Errors and Hallucinations

Logilica’s AI Advisor is designed to handle edge cases such as hallucinations and irrelevant data by incorporating mechanisms that improve its accuracy and reliability. To prevent hallucinations, the system uses a confidence-based approach: it tries to identify when it lacks sufficient information to answer a query accurately and, instead of fabricating a response, it communicates that uncertainty to the user.

For irrelevant or unanswerable queries, our advisor has built-in safeguards that detect when the question doesn’t align with the available data. For instance, if the model encounters a query like “What is my name?” it recognises that the information isn’t accessible and prompts the user to provide more context or reformulate the question. This approach ensures that our AI Advisor doesn’t generate speculative or incorrect answers, maintaining the quality and reliability of responses.
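
One simple way to implement such a safeguard is a similarity threshold on the vector search: if no indexed concept is close enough to the question, the advisor asks for clarification instead of answering. The sketch below illustrates the idea; the threshold value and the scoring convention (FAISS returns a distance, where lower means closer) are assumptions.

```python
# Hedged sketch of a confidence gate based on vector-search distance.
# `vector_store` is the index built earlier; the 0.8 threshold is illustrative.
def needs_clarification(question: str, vector_store, max_distance: float = 0.8) -> bool:
    """Return True when the closest indexed concept is too far from the query."""
    matches = vector_store.similarity_search_with_score(question, k=1)
    return not matches or matches[0][1] > max_distance

# Example flow:
# if needs_clarification("What is my name?", vector_store):
#     reply = ("I can only answer questions about your SDLC data. "
#              "Could you rephrase the question in terms of your metrics, "
#              "e.g. cycle time, review time or deployment frequency?")
# else:
#     ...  # continue with query generation and summarisation
```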

AI Advisor: The Next Generation 

Building on the experience described in this article, we are building a much-enhanced version 2.0, which is available in private beta right now. We will go into more detail on its architecture and design in the near future, but you can see a sneak peek below or contact us to get access to it.

Logilica AI Advisor 2.0

Summary

Our AI-powered assistant is revolutionising how we interact with data, making it easier and faster to extract valuable insights. By understanding user queries, generating structured searches, and building on continuous improvements in AI technology, we will only enhance these capabilities, making them indispensable for businesses and individuals alike. Whether you’re a developer, project manager, or data analyst, an AI Advisor can help you unlock the full potential of the data available across your SDLC pipeline, transforming overwhelming information into meaningful insights.

Alice Cui
AI Engineer