{"id":4823,"date":"2024-04-10T13:49:31","date_gmt":"2024-04-10T13:49:31","guid":{"rendered":"https:\/\/kindgeek.com\/blog\/?p=4823"},"modified":"2025-05-20T10:48:28","modified_gmt":"2025-05-20T10:48:28","slug":"5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java","status":"publish","type":"post","link":"https:\/\/www.kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java","title":{"rendered":"5 steps to develop an AI-powered Telegram bot with Langchain4j in Java"},"content":{"rendered":"<div class=\"inhype-post\"><p class=\"post-date\">Recently updated on May 20, 2025<\/p><\/div>\n<h2 class=\"wp-block-heading\">Introduction or \u201cWhat problem are we solving?\u201d<\/h2>\n\n\n\n<p id=\"afe1\">This article aims to illustrate <a href=\"https:\/\/medium.com\/@Michael.Kramarenko\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java-408868ba87ba\">how to create a\u00a0<strong>Lego of reusable building blocks<\/strong>\u00a0<\/a>for streamlining the development of smart bots powered by large language models (<strong>LLMs)<\/strong>. We\u2019ll explore how to build such a\u00a0<strong>Lego\u00a0<\/strong>utilizing features provided by\u00a0<a href=\"https:\/\/docs.langchain4j.dev\/\" rel=\"noreferrer noopener\" target=\"_blank\">Langchain4j<\/a>\u00a0<a href=\"https:\/\/github.com\/langchain4j\/langchain4j\" rel=\"noreferrer noopener\" target=\"_blank\">v.0.28.0<\/a>, a framework tailored to speed up the development of LLM-powered Java applications.<\/p>\n\n\n\n<p id=\"fe00\">We\u2019ll walk you through the process of a bot integration with the Retrieval-Augmented Generation (<strong>RAG<\/strong>) infrastructure. This approach extends LLM\u2019s conversational capabilities by allowing them to access specific domains or an organization\u2019s internal knowledge base. 
By seamlessly incorporating RAG, the chatbot gains the ability to retrieve and leverage the most up-to-date and relevant information without the need for retraining.<\/p>\n\n\n\n<p id=\"d2d0\">By leveraging RAG, we\u2019ll be able to address several use cases:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>\u201cEnhanced Conversational Agent\u201d<\/strong>&nbsp;to make conversations efficient, personalized, and accurate. This could be used in customer support bots, educational tutors, or virtual assistants, giving them the ability to use specific data.<\/li><li><strong>\u201cKnowledge-base Explorer\u201d<\/strong>&nbsp;for domains where large volumes of documents need to be analyzed and referenced. RAG can automate and enhance the process of finding relevant case law, precedents, etc., and can augment data with additional information from external sources or databases, thereby assisting professionals in making informed decisions.<\/li><li><strong>\u201cQuestion Answering System\u201d<\/strong>,&nbsp;like a company policy explorer, where RAG can significantly improve such systems by fetching relevant documents or data snippets to provide precise, accurate answers to user queries.
This is especially useful for fact-based questions where up-to-date information is crucial.<\/li><\/ul>\n\n\n\n<p id=\"544f\">The code examples provided in this article primarily focus on the bot\u2019s text modality.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"3263\">Prerequisites<\/h3>\n\n\n\n<p id=\"a213\">Before we begin, make sure you have the following:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Telegram bot name and token registered using&nbsp;<a href=\"https:\/\/t.me\/botfather\" rel=\"noreferrer noopener\" target=\"_blank\">Telegram Botfather<\/a>.<\/li><li>LLM API access, either:<\/li><li>An API key (like an&nbsp;<a href=\"https:\/\/platform.openai.com\/api-keys\" rel=\"noreferrer noopener\" target=\"_blank\">OpenAI API Key<\/a>) for an AIaaS that serves LLMs (like&nbsp;<a href=\"https:\/\/openai.com\/\" rel=\"noreferrer noopener\" target=\"_blank\">OpenAI<\/a>,&nbsp;<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-services\/openai\/overview\" rel=\"noreferrer noopener\" target=\"_blank\">Azure OpenAI<\/a>, etc.)<\/li><li>A locally installed LLM runtime (like&nbsp;<a href=\"https:\/\/localai.io\/\" rel=\"noreferrer noopener\" target=\"_blank\">LocalAI<\/a>,&nbsp;<a href=\"https:\/\/github.com\/jmorganca\/ollama\/tree\/main\" rel=\"noreferrer noopener\" target=\"_blank\">Ollama<\/a>, etc.)
with corresponding models shown in the picture below<\/li><\/ul>\n\n\n\n<p class=\"has-text-align-center\"><img decoding=\"async\" src=\"https:\/\/lh7-us.googleusercontent.com\/ZoASSxguuLat3sqPYv-nQ4LfI7r1pxmMlPFRuAwva1ke3rlfr-3JLLBrQM35FiAa9dka8izWx_swFtjcxEw1ewvKH_XitapIBi8LUJitetSYFhPt65EB8Leb604TeqZA2p5P3kM4bL_uX2uoYeeehYs\" style=\"width: 500px;\"><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"46ef\">Step 1: Creating a Spring Boot project<\/h2>\n\n\n\n<p id=\"ee68\">To begin building our chatbot, we need to properly configure the development environment. Let\u2019s start by creating a Spring Boot project and including the necessary dependencies:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Create a Spring Boot 3.2.1 Maven project using, for example,&nbsp;<a href=\"https:\/\/start.spring.io\/\" rel=\"noreferrer noopener\" target=\"_blank\">Spring Initializr<\/a>.<\/li><li>Add&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/org.telegram\/telegrambots\/6.9.7.1\" rel=\"noreferrer noopener\" target=\"_blank\">telegram bot 6.9.7.1<\/a>&nbsp;support<\/li><li>Add&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/dev.langchain4j\/langchain4j\/0.28\" rel=\"noreferrer noopener\" target=\"_blank\">Langchain4j 0.28<\/a>&nbsp;Maven dependencies.
We can start our journey with <a href=\"https:\/\/mvnrepository.com\/artifact\/dev.langchain4j\/langchain4j\/0.27.1\" rel=\"noreferrer noopener\" target=\"_blank\">langchain4j<\/a>,&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/dev.langchain4j\/langchain4j-open-ai\/0.27.1\" rel=\"noreferrer noopener\" target=\"_blank\">langchain4j-open-ai<\/a>,&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/dev.langchain4j\/langchain4j-ollama\/0.28.0\" rel=\"noreferrer noopener\" target=\"_blank\">langchain4j-ollama<\/a>,&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/dev.langchain4j\/langchain4j-document-parser-apache-pdfbox\/0.28.0\" rel=\"noreferrer noopener\" target=\"_blank\">langchain4j-pdf-parser<\/a>,&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/dev.langchain4j\/langchain4j-document-parser-apache-poi\/0.28.0\" rel=\"noreferrer noopener\" target=\"_blank\">langchain4j-apache-poi-parser<\/a><\/li><li>Feel free to add other dependencies like&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/org.projectlombok\/lombok\/1.18.30\" rel=\"noreferrer noopener\" target=\"_blank\">Lombok<\/a>,&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/org.slf4j\/slf4j-api\/2.0.10\" rel=\"noreferrer noopener\" target=\"_blank\">org.slf4j<\/a>,&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/commons-io\/commons-io\/2.15.1\" rel=\"noreferrer noopener\" target=\"_blank\">commons-io<\/a>,&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/org.springdoc\/springdoc-openapi-starter-webmvc-ui\/2.2.0\" rel=\"noreferrer noopener\" target=\"_blank\">Springdoc openapi<\/a>, etc.<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"561f\">Step 2: Configuring LLM<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"2bf9\">LLM parameters configuration<\/h3>\n\n\n\n<p id=\"3fd9\">To configure model parameters for our project, we will use externally configurable builders (configured via application.properties or application.yml).<\/p>\n\n\n\n<h3
class=\"wp-block-heading\" id=\"bffa\">AiaaS models<\/h3>\n\n\n\n<p id=\"123c\">For such types of models, externally configurable Builder&nbsp;<em>OpenAIChatLanguageModelBuilder<\/em>&nbsp;could look like the code below:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Service\n@Slf4j<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public class<\/span> OpenAIChatLanguageModelBuilder <span class=\"has-inline-color has-vivid-red-color\">extends<\/span><span class=\"has-inline-color has-luminous-vivid-orange-color\">\n<\/span>\n       OpenAIChatModelBuilderParameters\n       <span class=\"has-inline-color has-vivid-red-color\">implements<\/span><span class=\"has-inline-color has-luminous-vivid-orange-color\"> <\/span>ChatModelBuilder {\n\n\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Override<\/span>\n   <span class=\"has-inline-color has-vivid-red-color\">public <\/span>ChatLanguageModel build() {\n       <span class=\"has-inline-color has-vivid-purple-color\">ChatLanguageModel<\/span> <span class=\"has-inline-color has-vivid-green-cyan-color\">chatLanguageModel<\/span> = OpenAiChatModel.builder()\n               .apiKey(OPENAI_API_KEY)\n               .modelName(gptModelName)\n               .timeout(ofSeconds(timeoutSec.longValue()))\n               .logRequests(logRequests.booleanValue())\n               .logResponses(logResponses.booleanValue())\n               .maxRetries(maxRetries)\n               .temperature(temperature)\n               .build();\n       <span class=\"has-inline-color has-vivid-red-color\">return <\/span>chatLanguageModel;\n   }\n. . 
.\n}<\/code><\/pre>\n\n\n\n<p>inheriting from a parent&nbsp;<em>OpenAIChatModelBuilderParameters<\/em>&nbsp;class, like:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">public abstract class<\/span> OpenAIChatModelBuilderParameters <span class=\"has-inline-color has-vivid-cyan-blue-color\">extends BaseChatModelBuilderParameters<\/span> {\n  <span class=\"has-inline-color has-luminous-vivid-amber-color\"> @Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>${OPENAI_API_KEY}<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>)\n   String OPENAI_API_KEY;\n<span class=\"has-inline-color has-luminous-vivid-amber-color\"> @Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"#{new Boolean('<\/span>${GPT.log.requests}<span class=\"has-inline-color has-luminous-vivid-orange-color\">')}\"<\/span>)\n   <span class=\"has-inline-color has-vivid-purple-color\">Boolean<\/span> logRequests;\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"#{new Boolean('<\/span>${GPT.log.responses}<span class=\"has-inline-color has-luminous-vivid-orange-color\">')}\"<\/span>)\n   <span class=\"has-inline-color has-vivid-purple-color\">Boolean <\/span>logResponses;\n...<\/code><\/pre>\n\n\n\n<p>inheriting from the code below accordingly:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">public abstract class<\/span> BaseChatModelBuilderParameters {\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>${GPT.modelName}<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>)\n   String gptModelName;\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span 
class=\"has-inline-color has-luminous-vivid-orange-color\">\"#{new Integer ('<\/span>${GPT.maxRetries}<span class=\"has-inline-color has-luminous-vivid-orange-color\">')}\"<\/span>)\n   Integer maxRetries;\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"#{new Long ('<\/span>${GPT.timeout.sec}<span class=\"has-inline-color has-luminous-vivid-orange-color\">')}\"<\/span>)\n   <span class=\"has-inline-color has-vivid-purple-color\">Long <\/span>timeoutSec;\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"#{new Double ('<\/span>${GPT.temperature}<span class=\"has-inline-color has-luminous-vivid-orange-color\">')}\"<\/span>)\n   <span class=\"has-inline-color has-vivid-purple-color\">Double <\/span>temperature;\n\n\n...<\/code><\/pre>\n\n\n\n<p>Similarly, we can add some useful parameters like organization ID and seed, aimed at increasing AiaaS model determinism, etc.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"093b\"><strong>On-premise deployable models<\/strong><\/h3>\n\n\n\n<p id=\"b1ca\">Likewise, a corresponding ChatModelBuilder (for on-premise deployable models like Ollama) could look like this:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Service<\/span>\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Slf4j<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public class<\/span> OllamaChatModelBuilder <span class=\"has-inline-color has-vivid-red-color\">extends <\/span>LocalChatModelBuilderParameters\n       <span class=\"has-inline-color has-vivid-red-color\">implements <\/span>ChatModelBuilder {\n\n\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Override<\/span>\n   <span class=\"has-inline-color has-vivid-red-color\">public 
<\/span>ChatLanguageModel build<span class=\"has-inline-color has-vivid-purple-color\">()<\/span> {\n       <span class=\"has-inline-color has-vivid-purple-color\">ChatLanguageModel <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">model <\/span>= OllamaChatModel.builder()\n               .baseUrl(baseURL) <span class=\"has-inline-color has-vivid-green-cyan-color\">\/\/http:\/\/localhost:11434<\/span>\n               .modelName(gptModelName)<span class=\"has-inline-color has-vivid-green-cyan-color\">\/\/phi, \"orca-mini\"<\/span>\n               .maxRetries(maxRetries)\n               .timeout(ofSeconds(timeoutSec))\n               .temperature(temperature)\n               .build();\n       <span class=\"has-inline-color has-vivid-red-color\">return <\/span>model;\n   }\n. . .\n}<\/code><\/pre>\n\n\n\n<p>inheriting from a parent&nbsp;<em>LocalChatModelBuilderParameters<\/em>&nbsp;class, like<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">public abstract class<\/span> LocalChatModelBuilderParameters\n       extends BaseChatModelBuilderParameters{\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>${GPT.baseURL}<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>)\n   String baseURL;\n . . 
.\n}<\/code><\/pre>\n\n\n\n<p id=\"cedb\">During this step, pay attention to the following details:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>The key difference between builders for on-premise and AIaaS models is&nbsp;<em>baseURL<\/em>&nbsp;support<\/li><li>Both&nbsp;<em>OllamaChatModelBuilder<\/em>&nbsp;and&nbsp;<em>OpenAIChatLanguageModelBuilder<\/em>&nbsp;build a&nbsp;<em>ChatLanguageModel<\/em>, the same abstraction for different language models.<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"26b5\">Chat Memory Configuration<\/h3>\n\n\n\n<p id=\"77da\">For a simple scenario, we have two options:&nbsp;<em>TokenWindowChatMemory<\/em>&nbsp;and&nbsp;<em>MessageWindowChatMemory<\/em>.<\/p>\n\n\n\n<p id=\"bf43\"><em>TokenWindowChatMemory<\/em>&nbsp;operates as a sliding window of&nbsp;<em>maxTokens<\/em>&nbsp;tokens. It retains as many of the most recent messages as can fit into the window. If there isn\u2019t enough space for a new message, the oldest one (or multiple) is discarded.<\/p>\n\n\n\n<p id=\"8d99\"><em>MessageWindowChatMemory<\/em>&nbsp;operates as a sliding window of&nbsp;<em>maxMessages<\/em>&nbsp;messages. It retains as many of the most recent messages as can fit into the window.
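The sliding-window eviction behavior can be sketched in a few lines of plain Java (a toy stand-in for illustration only, not the actual Langchain4j class, which also treats the SystemMessage specially):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Toy sliding-window memory: keeps at most maxMessages, evicting the oldest first.
public class WindowMemorySketch {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    public WindowMemorySketch(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    public void add(String message) {
        if (messages.size() == maxMessages) {
            messages.removeFirst(); // discard the oldest message to make room
        }
        messages.addLast(message);
    }

    public List<String> messages() {
        return List.copyOf(messages);
    }
}
```

With maxMessages = 2, adding "a", "b", and "c" leaves the window holding ["b", "c"].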
If there isn\u2019t enough space for a new message, the oldest one is discarded.<\/p>\n\n\n\n<p id=\"a6bc\">In our project, we will use externally configurable&nbsp;<em>MessageWindowChatMemory<\/em>&nbsp;in the following way:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Service<\/span>\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Slf4j<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public class<\/span> MessageWindowChatMemoryBuilderImpl <span class=\"has-inline-color has-vivid-red-color\">implements <\/span>MessageWindowChatMemoryBuilder {\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"${chat.system.message}\")<\/span>\n   String systemMessage;\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"#{new Integer ('${chat.memory.maxMessages}')}\")<\/span>\n   Integer maxMessages;\n\n\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Override<\/span>\n   <span class=\"has-inline-color has-vivid-red-color\">public <\/span>MessageWindowChatMemory build<span class=\"has-inline-color has-vivid-purple-color\">()<\/span> {\n       <span class=\"has-inline-color has-vivid-purple-color\">MessageWindowChatMemory<\/span> <span class=\"has-inline-color has-vivid-green-cyan-color\">memory <\/span>= MessageWindowChatMemory.withMaxMessages(maxMessages);\n       <span class=\"has-inline-color has-vivid-purple-color\">String <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">text <\/span>= <span class=\"has-inline-color has-vivid-red-color\">new <\/span>String(systemMessage);\n       <span class=\"has-inline-color has-vivid-purple-color\">SystemMessage <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">systemMessage <\/span>= SystemMessage.from(text);\n       memory.add(systemMessage);\n       <span class=\"has-inline-color has-vivid-red-color\">return <\/span>memory;\n   
}\n}<\/code><\/pre>\n\n\n\n<p id=\"5ce8\">Note the externally configured&nbsp;<em>SystemMessage<\/em>, typically used to instruct the LLM regarding the AI\u2019s actions, such as its behavior or response style.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"6e68\">Langchain4j RAG Overview<\/h3>\n\n\n\n<p id=\"8189\">The&nbsp;<a href=\"https:\/\/docs.langchain4j.dev\/\" rel=\"noreferrer noopener\" target=\"_blank\">Langchain4j<\/a>&nbsp;framework provides built-in support for RAG ingestion and retrieval chains.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"538\" src=\"https:\/\/kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-1-1024x538.jpeg\" alt=\"\" class=\"wp-image-4824\" srcset=\"https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-1-1024x538.jpeg 1024w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-1-300x158.jpeg 300w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-1-768x403.jpeg 768w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-1-1536x807.jpeg 1536w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-1-2048x1076.jpeg 2048w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-1-360x189.jpeg 360w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption><strong>Image credit:<\/strong> <a href=\"https:\/\/docs.langchain4j.dev\/tutorials\/rag\" rel=\"noreferrer noopener\" target=\"_blank\">Langchain4j<\/a><\/figcaption><\/figure>\n\n\n\n<p>The first step of the ingesting chain is document loading and parsing, aimed at achieving a&nbsp;<em>Document<\/em>&nbsp;representation independent of the original location and format.
Langchain4j provides&nbsp;<em>DocumentLoader<\/em>&nbsp;for loading documents (using the&nbsp;<em>Document<\/em>&nbsp;representation) from many sources (<a href=\"https:\/\/aws.amazon.com\/s3\/\" rel=\"noreferrer noopener\" target=\"_blank\">S3 bucket<\/a>,&nbsp;<a href=\"https:\/\/azure.microsoft.com\/en-us\/products\/storage\/blobs\" rel=\"noreferrer noopener\" target=\"_blank\">Azure Blob storage<\/a>,&nbsp;<a href=\"https:\/\/www.tencentcloud.com\/products\/cos\" rel=\"noreferrer noopener\" target=\"_blank\">Tencent Cloud Object Storage<\/a>, GitHub, websites, filesystem) in different formats (text, HTML, PDF, Microsoft documents).<\/p>\n\n\n\n<p class=\"has-text-align-center\"><img decoding=\"async\" src=\"https:\/\/lh7-us.googleusercontent.com\/adBEW-_LY0tMpxclRGFtsFt8eO9xgRsR_7p3hsLSsQOuRuNaTjT1dvmaw34q42AcJ6YPuNMao-hTfi3JTrgbt18OqZSF06zJV-q6sKUBVvCgclRIHtc8-rc-gfNIBLv7BMh5cnzCzdSOiOOhZg6vgR8\" style=\"width: 800px;\"><\/p>\n\n\n\n<p id=\"5002\">The next step is splitting (or chunking) a&nbsp;<em>Document<\/em>&nbsp;into a&nbsp;<em>List&lt;TextSegment&gt;<\/em>. Vector embeddings will be calculated using&nbsp;<em>EmbeddingModel<\/em>&nbsp;for each segment and then stored in some vector database. The&nbsp;<em>EmbeddingModel<\/em>&nbsp;interface wraps different implementations like&nbsp;<em>OpenAiEmbeddingModel<\/em>,&nbsp;<em>AzureOpenAiEmbeddingModel<\/em>,&nbsp;<em>OllamaEmbeddingModel<\/em>, etc., aimed at converting words, sentences, or documents into embeddings.
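During retrieval, stored segments are ranked by vector similarity to the query embedding, typically cosine similarity. A minimal plain-Java illustration with toy 3-dimensional vectors (real models such as all-MiniLM-L6-v2 emit 384-dimensional vectors):

```java
public class CosineSimilaritySketch {

    // Cosine similarity: dot(a, b) / (|a| * |b|); 1.0 for identical directions, 0.0 for orthogonal.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] query    = {1, 0, 0};
        double[] segmentA = {0.9, 0.1, 0}; // points almost the same way as the query
        double[] segmentB = {0, 1, 0};     // orthogonal to the query
        // Segment A ranks higher, so it would be retrieved first.
        System.out.println(cosine(query, segmentA) > cosine(query, segmentB)); // prints "true"
    }
}
```

The vector store's job is to run exactly this kind of nearest-neighbor ranking efficiently over millions of stored embeddings.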
We will use a popular sentence-transformers embedding model,&nbsp;<a href=\"https:\/\/huggingface.co\/sentence-transformers\/all-MiniLM-L6-v2\" rel=\"noreferrer noopener\" target=\"_blank\">all-MiniLM-L6-v2<\/a>, as it can run within our Java application\u2019s process.<\/p>\n\n\n\n<p id=\"fc44\">Langchain4j supports a broad selection of vector databases wrapped by the&nbsp;<em>EmbeddingStore&lt;Embedded&gt;<\/em>&nbsp;interface:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:875\/1*DoHQkpQ-7AiGg-36rF7luw.png\" alt=\"\"\/><\/figure>\n\n\n\n<p id=\"3664\"><em>InMemoryEmbeddingStore&lt;Embedded&gt;<\/em>, which stores embeddings in memory, is also worth mentioning as a useful rapid-prototyping tool.<\/p>\n\n\n\n<p id=\"d284\">In our project, we will use&nbsp;<a href=\"https:\/\/github.com\/pgvector\/pgvector\" rel=\"noreferrer noopener\" target=\"_blank\">Pgvector<\/a>&nbsp;for&nbsp;<a href=\"https:\/\/www.postgresql.org\/docs\/15\/index.html\" rel=\"noreferrer noopener\" target=\"_blank\">Postgresql v.15<\/a>, so after&nbsp;<a href=\"https:\/\/github.com\/pgvector\/pgvector\" rel=\"noreferrer noopener\" target=\"_blank\">Pgvector extension installation<\/a>&nbsp;(via&nbsp;<strong>CREATE EXTENSION vector<\/strong>), we will be able to use the&nbsp;<em>vector<\/em>&nbsp;type and a&nbsp;<em>vectors<\/em>&nbsp;table:<\/p>\n\n\n\n<p class=\"has-text-align-center\"><img decoding=\"async\" src=\"https:\/\/lh7-us.googleusercontent.com\/3KAkBZWUuQuqwUI1d3RWQ4Vf_26Wo0kQdR4_9LqZDtmea71qz5OmVxL3iNZ_12XBUPUG2ZWdLKARsN8ElbG89YtJBb2UDp0yYTD3B_NGd7BQp4rR3O7b8-08uyvneeYzRRe3zuj8StF6KvAXhBWOIsw\" style=\"width: 700px;\"><\/p>\n\n\n\n<p id=\"927b\">During this step, don\u2019t forget:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>To add&nbsp;<a href=\"https:\/\/mvnrepository.com\/artifact\/dev.langchain4j\/langchain4j-pgvector\/0.28.0\" rel=\"noreferrer noopener\"
target=\"_blank\">Langchain4j-pgvector<\/a>&nbsp;Maven dependency<\/li><li>To add&nbsp;<a href=\"https:\/\/github.com\/pgvector\/pgvector\" rel=\"noreferrer noopener\" target=\"_blank\">Pgvector<\/a>&nbsp;configuration parameters into your application properties, like shown here:<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Service<\/span>\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@RequiredArgsConstructor<\/span>\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Slf4j<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public class<\/span> EmbeddingStoreServiceBuilderImpl <span class=\"has-inline-color has-vivid-cyan-blue-color\">implements EmbeddingStoreServiceBuilder<\/span> {\n\n\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>${pgvector.host}<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>)\n   <span class=\"has-inline-color has-vivid-red-color\">private <\/span>String host;\n  <span class=\"has-inline-color has-luminous-vivid-amber-color\"> @Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"#{new Integer ('<\/span>${pgvector.port}<span class=\"has-inline-color has-luminous-vivid-orange-color\">')}\"<\/span>)\n   <span class=\"has-inline-color has-vivid-red-color\">private <\/span>Integer port;\n  <span class=\"has-inline-color has-luminous-vivid-amber-color\"> @Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>${pgvector.user}<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>)\n   <span class=\"has-inline-color has-vivid-red-color\">private <\/span>String user;\n  <span class=\"has-inline-color has-vivid-red-color\"> <\/span><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span 
class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>${pgvector.password}<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>)\n   <span class=\"has-inline-color has-vivid-red-color\">private <\/span>String password;\n  <span class=\"has-inline-color has-luminous-vivid-amber-color\"> @Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>${pgvector.database}<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>)\n   <span class=\"has-inline-color has-vivid-red-color\">private <\/span>String database;\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>${pgvector.table}<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"<\/span>)\n   <span class=\"has-inline-color has-vivid-red-color\">private <\/span>String table;\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"#{new Boolean('<\/span>${pgvector.droptable}<span class=\"has-inline-color has-luminous-vivid-orange-color\">')}\"<\/span>)\n   <span class=\"has-inline-color has-vivid-red-color\">private <\/span><span class=\"has-inline-color has-vivid-purple-color\">Boolean <\/span>dropTableFirst;\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"#{new Boolean('<\/span>${pgvector.useindex}<span class=\"has-inline-color has-luminous-vivid-orange-color\">')}\"<\/span>)\n   <span class=\"has-inline-color has-vivid-red-color\">private <\/span><span class=\"has-inline-color has-vivid-purple-color\">Boolean <\/span>useIndex;\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value<\/span>(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"#{new Integer 
('<\/span>${pgvector.dimension}<span class=\"has-inline-color has-luminous-vivid-orange-color\">')}\"<\/span>)\n   <span class=\"has-inline-color has-vivid-red-color\">private <\/span>Integer dimension;\n\n\n\n\n   <span class=\"has-inline-color has-vivid-red-color\">public <\/span>EmbeddingStore&lt;TextSegment&gt; build() {\n       EmbeddingStore&lt;TextSegment&gt; embeddingStore = PgVectorEmbeddingStore\n               .builder()\n. . .<\/code><\/pre>\n\n\n\n<p>Once our data have been ingested into the&nbsp;<em>EmbeddingStore&lt;Embedded&gt;<\/em>, a bot user can query relevant information.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"538\" src=\"https:\/\/kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-2-1-1024x538.jpeg\" alt=\"\" class=\"wp-image-4833\" srcset=\"https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-2-1-1024x538.jpeg 1024w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-2-1-300x158.jpeg 300w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-2-1-768x403.jpeg 768w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-2-1-1536x807.jpeg 1536w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-2-1-2048x1076.jpeg 2048w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-2-1-360x189.jpeg 360w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption><strong>Image credit:<\/strong><a href=\"https:\/\/docs.langchain4j.dev\/tutorials\/rag\" rel=\"noreferrer noopener\" target=\"_blank\">Langchain4j<\/a><\/figcaption><\/figure>\n\n\n\n<p id=\"91e0\">Langchain4j release 0.28.0 supports several retrieval scenarios:<\/p>\n\n\n\n<ol class=\"wp-block-list\"><li>Naive RAG via ConversationalRetrievalChain<\/li><\/ol>\n\n\n\n<p id=\"bdde\">2. 
Advanced RAG:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Based on query compression<\/li><li>Based on query routing, which directs a user query to the most appropriate&nbsp;<em>EmbeddingStore&lt;Embedded&gt;<\/em>&nbsp;when a knowledge base is spread across multiple sources<\/li><li>Based on the reranking API provided by&nbsp;<a href=\"https:\/\/docs.cohere.com\/reference\/rerank-1\" rel=\"noreferrer noopener\" target=\"_blank\">Cohere<\/a><\/li><\/ul>\n\n\n\n<p id=\"6e2c\">In our project, we will use retrieval based on the query compression approach.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"e015\">Step 3: Implementing an ingesting infrastructure<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"a631\">Parsing<\/h3>\n\n\n\n<p id=\"c291\">File parsing aims to obtain a&nbsp;<em>Document<\/em>&nbsp;abstraction by stripping format-specific data and retaining the unstructured content of a single file:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">public <\/span>Document parse(<span class=\"has-inline-color has-vivid-purple-color\">Path documentPath<\/span>) {\n       String extension = getFileExtension(documentPath);\n       DocumentParser documentParser;\n       <span class=\"has-inline-color has-vivid-red-color\">switch <\/span>(extension) {\n           <span class=\"has-inline-color has-vivid-red-color\">case <\/span><span class=\"has-inline-color has-luminous-vivid-orange-color\">\"pdf\"<\/span>:\n               documentParser = <span class=\"has-inline-color has-vivid-red-color\">new <\/span>ApachePdfBoxDocumentParser();\n               <span class=\"has-inline-color has-vivid-red-color\">break<\/span>;\n           <span class=\"has-inline-color has-vivid-red-color\">case <\/span><span class=\"has-inline-color has-luminous-vivid-orange-color\">\"doc\"<\/span>, <span class=\"has-inline-color has-luminous-vivid-orange-color\">\"xlsx\"<\/span>, <span class=\"has-inline-color
has-luminous-vivid-orange-color\">\"docx\"<\/span>, <span class=\"has-inline-color has-luminous-vivid-orange-color\">\"xls\"<\/span>, <span class=\"has-inline-color has-luminous-vivid-orange-color\">\"ppt\"<\/span>, <span class=\"has-inline-color has-luminous-vivid-orange-color\">\"pptx\"<\/span>:\n               documentParser = <span class=\"has-inline-color has-vivid-red-color\">new <\/span>ApachePoiDocumentParser();\n               <span class=\"has-inline-color has-vivid-red-color\">break<\/span>;\n           <span class=\"has-inline-color has-luminous-vivid-amber-color\">default<\/span>:\n               documentParser = <span class=\"has-inline-color has-vivid-red-color\">new <\/span>TextDocumentParser();\n       }\n       <span class=\"has-inline-color has-vivid-red-color\">return <\/span>FileSystemDocumentLoader.loadDocument(documentPath, documentParser);\n   }<\/code><\/pre>\n\n\n\n<p>It is important to note that in the HTML case, we need to strip the markup using&nbsp;<em>HtmlTextExtractor<\/em>, as follows:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">public <\/span>Document extractHtml(<span class=\"has-inline-color has-vivid-purple-color\">String urlString<\/span>) {\n<span class=\"has-inline-color has-vivid-purple-color\">URL <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">url <\/span>= formUrl(urlString);\n<span class=\"has-inline-color has-vivid-purple-color\">Document <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">htmlDocument <\/span>= UrlDocumentLoader.load(url, <span class=\"has-inline-color has-vivid-red-color\">new <\/span>TextDocumentParser());\n   <span class=\"has-inline-color has-vivid-purple-color\">HtmlTextExtractor <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">transformer <\/span>= <span class=\"has-inline-color has-vivid-red-color\">new <\/span>HtmlTextExtractor(<span class=\"has-inline-color
has-vivid-red-color\">null<\/span>, <span class=\"has-inline-color has-vivid-red-color\">null<\/span>, <span class=\"has-inline-color has-vivid-red-color\">true<\/span>);\n   <span class=\"has-inline-color has-vivid-purple-color\">Document <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">transformedDocument <\/span>= transformer.transform(htmlDocument);\n   <span class=\"has-inline-color has-vivid-red-color\">return <\/span>transformedDocument;\n}<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"26a1\">Splitting and Ingesting<\/h3>\n\n\n\n<p id=\"bf9c\">For generic text splitting, Langchain4j recommends using recursive&nbsp;<em>DocumentSplitter<\/em>. It tries to split the document into paragraphs first and fits as many paragraphs into a single&nbsp;<em>TextSegment<\/em>&nbsp;as possible. If some paragraphs are too long, they are recursively split into lines, then sentences, then words, and then characters until they fit into a segment. The code below illustrates what recursive&nbsp;<em>DocumentSplitter<\/em>&nbsp;could look like, how to use<em>&nbsp;EmbeddingModel<\/em>&nbsp;like&nbsp;<a href=\"https:\/\/huggingface.co\/sentence-transformers\/all-MiniLM-L6-v2\" rel=\"noreferrer noopener\" target=\"_blank\">all-MiniLM-L6-v2<\/a>, and how to store&nbsp;<em>List&lt;Embedding&gt;<\/em>&nbsp;and&nbsp;<em>List&lt;TextSegment&gt;<\/em>&nbsp;into our&nbsp;<a href=\"https:\/\/github.com\/pgvector\/pgvector\" rel=\"noreferrer noopener\" target=\"_blank\">Pgvector<\/a>&nbsp;store.<\/p>\n\n\n\n<p id=\"4211\">The code also illustrates how to add externally configurable parameters for DocumentSplitter:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Integer&nbsp;<em>maxSegmentSizeInChars<\/em>&nbsp;\u2014 The maximum size of the segment, defined in characters.<\/li><li>Integer&nbsp;<em>maxOverlapSizeInTokens<\/em>&nbsp;\u2014 The maximum size of the overlap, defined in characters.<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code><span 
class=\"has-inline-color has-luminous-vivid-amber-color\">@NonNull<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">private final<\/span> EmbeddingStoreServiceBuilder embeddingStoreServiceBuilder;\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@NonNull<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">private final<\/span> EmbeddingModelBuilder embeddingModelBuilder;\n\n\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"#{new Integer ('${openai.document.splitter.maxSegmentSizeInChars}')}\")<\/span>\nInteger maxSegmentSizeInChars;\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"#{new Integer ('${openai.document.splitter.maxOverlapSizeInTokens}')}\")<\/span>\nInteger maxOverlapSizeInTokens;\n\n\n<span class=\"has-inline-color has-vivid-red-color\">public void<\/span> IngestDocument(<span class=\"has-inline-color has-vivid-purple-color\">Document document<\/span>) {\n   IngestDocument(document, maxSegmentSizeInChars, maxOverlapSizeInTokens);\n}\n\n\n<span class=\"has-inline-color has-vivid-red-color\">public void<\/span> IngestDocument(<span class=\"has-inline-color has-vivid-purple-color\">Document document<\/span>, <span class=\"has-inline-color has-vivid-purple-color\">Integer maxSegmentSize<\/span>, <span class=\"has-inline-color has-vivid-purple-color\">Integer maxOverlapSize<\/span>) {\n   <span class=\"has-inline-color has-vivid-purple-color\">EmbeddingModel <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">embeddingModel <\/span>= embeddingModelBuilder.build();\n   EmbeddingStore&lt;TextSegment&gt; pgVectorEmbeddingStore = embeddingStoreServiceBuilder.build();\n\n\n   <span class=\"has-inline-color has-vivid-purple-color\">DocumentSplitter <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">splitter <\/span>= DocumentSplitters.recursive(maxSegmentSize, maxOverlapSize);\n   List&lt;TextSegment&gt; segments = splitter.split(document);\n  
 log.debug(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"Text segments: {}\"<\/span>, segments);\n\n\n   List&lt;Embedding&gt; embeddings = embeddingModel.embedAll(segments).content();\n   pgVectorEmbeddingStore.addAll(embeddings, segments);\n   log.debug(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"Ingested document: {}\"<\/span>, document);\n}<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"4391\">Step 4: Implement retrieval infrastructure<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"27eb\">Compressing Query<\/h3>\n\n\n\n<p id=\"b5b6\">This technique reduces noise in the retrieved documents by \u201ccompressing\u201d away irrelevant information. The idea behind this method goes back to the&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2310.04408\" rel=\"noreferrer noopener\" target=\"_blank\">RECOMP<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2310.05736\" rel=\"noreferrer noopener\" target=\"_blank\">LLMLingua<\/a>&nbsp;publications.<\/p>\n\n\n\n<p class=\"has-text-align-center\"><img decoding=\"async\" src=\"https:\/\/lh7-us.googleusercontent.com\/O-0WTsu5_ctbJsEAZ1oQwCfyIaets7huwnHcmq2qLGZq80T6-7hrx30t7xxfwDnn4v-J1imvs-pIoZki3HCvKLANFS_6kxqYn4-lsDRC35ojzwUOZ6yVzcaD848elUZkFLmedwLvLQqI4FeemlFHIGc\" style=\"width: 700px;\"><\/p>\n\n\n\n<p class=\"has-text-align-center\">image credits&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2310.04408\" rel=\"noreferrer noopener\" target=\"_blank\">RECOMP<\/a><\/p>\n\n\n\n<p>Query compression involves taking the user\u2019s query and the preceding conversation, then asking the LLM to \u201ccompress\u201d them into a single, self-contained query. The method adds a bit of latency and cost but can significantly enhance the quality of the RAG process. 
It\u2019s worth noting that the LLM used for compression doesn\u2019t have to be the same as the one used for conversation.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2311.11045\" rel=\"noreferrer noopener\" target=\"_blank\">For instance, you might use a local Small Language Model (<strong>SLM<\/strong>), like Orca 2 (7B &amp; 13B)<\/a>&nbsp;for summarization.<\/p>\n\n\n\n<p class=\"has-text-align-center\"><img decoding=\"async\" src=\"https:\/\/lh7-us.googleusercontent.com\/zUqBoaQpjT3MMNiZUG04mKRzGzP91iVYA2g2QkFW0q8Kb-Ov97AbsOY_wus4x7kpUv_uY3I6GjjPSJVjAz5zPcYSM5koLzcs7r1X4XVDMHig2uc3Upk61bTe6VDVrvnxO03ksY4p2nNEL_cXkR8hGo4\" style=\"width: 700px;\"><\/p>\n\n\n\n<p class=\"has-text-align-center\">image credits&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2311.11045\" rel=\"noreferrer noopener\" target=\"_blank\">Orca2<\/a><\/p>\n\n\n\n<p>In Langchain4j, query compression is implemented as&nbsp;<em>PromptTemplate<\/em>&nbsp;aimed to summarize the state of chat memory, user query and relevant information retrieved from&nbsp;<em>EmbeddingStore<\/em>. The code below shows the default summarization&nbsp;<em>PromptTemplate<\/em>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code> \"Read and understand the conversation between the User and the AI. \" +\n               \"Then, analyze the new query from the User. \" +\n               \"Identify all relevant details, terms, and context from both the conversation and the new query. \" +\n               \"Reformulate this query into a clear, concise, and self-contained format suitable for information retrieval.\\n\" +\n               \"\\n\" +\n               \"Conversation:\\n\" +\n               \"{{chatMemory}}\\n\" +\n               \"\\n\" +\n               \"User query: {{query}}\\n\" +\n               \"\\n\" +\n               \"It is very important that you provide only reformulated query and nothing else! 
\" +\n               \"Do not prepend a query with anything!\"<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"18a4\">Configure Content Retriever<\/h3>\n\n\n\n<p id=\"15aa\">We will use externally configured thresholds to define the \u201clevel of information relevance\u201d in the following way:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Slf4j<\/span>\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@RequiredArgsConstructor<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public class<\/span> ContentRetrieverBuilderImpl <span class=\"has-inline-color has-vivid-red-color\">implements <\/span>ContentRetrieverBuilder {\n   <span class=\"has-inline-color has-vivid-red-color\">private final<\/span> EmbeddingStoreServiceBuilder embeddingStoreServiceBuilder;\n   <span class=\"has-inline-color has-vivid-red-color\">private final<\/span> EmbeddingModelBuilder embeddingModelBuilder;\n\n\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"#{new Double ('${rag.contentRetriever.minScore}')}\")<\/span>\n   Double minScore;\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"#{new Integer ('${rag.contentRetriever.maxResults}')}\")<\/span>\n   Integer maxResults;\n\n\n  <span class=\"has-inline-color has-luminous-vivid-amber-color\"> @Override<\/span>\n   <span class=\"has-inline-color has-vivid-red-color\">public <\/span>ContentRetriever build<span class=\"has-inline-color has-vivid-purple-color\">()<\/span> {\n       <span class=\"has-inline-color has-vivid-purple-color\">EmbeddingStore <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">embeddingStore <\/span>= embeddingStoreServiceBuilder.build();\n       <span class=\"has-inline-color has-vivid-purple-color\">EmbeddingModel <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">embeddingModel <\/span>= embeddingModelBuilder.build();\n       
<span class=\"has-inline-color has-vivid-purple-color\">ContentRetriever <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">contentRetriever <\/span>= EmbeddingStoreContentRetriever.builder()\n               .embeddingStore(embeddingStore)\n               .embeddingModel(embeddingModel)\n               .maxResults(maxResults)\n               .minScore(minScore)\n               .build();\n       <span class=\"has-inline-color has-vivid-red-color\">return <\/span>contentRetriever;<\/code><\/pre>\n\n\n\n<p id=\"2f17\">Where:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>The threshold configuration could look like this:<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code>rag.contentRetriever.maxResults=<span class=\"has-inline-color has-vivid-cyan-blue-color\">3<\/span>\nrag.contentRetriever.minScore=<span class=\"has-inline-color has-vivid-cyan-blue-color\">0.59<\/span><\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\"><li><em>EmbeddingStoreServiceBuilder<\/em>&nbsp;builds&nbsp;<em>pgVectorEmbeddingStore<\/em><\/li><li><em>EmbeddingModelBuilder<\/em>&nbsp;builds&nbsp;<a href=\"https:\/\/huggingface.co\/sentence-transformers\/all-MiniLM-L6-v2\" rel=\"noreferrer noopener\" target=\"_blank\">all-MiniLM-L6-v2<\/a>&nbsp;as described earlier (see Step 3 for details)<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"8d9e\">Step 5: Integration of LLM-powered building blocks with the Telegram bot<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"46b3\">Define Completion Interfaces<\/h3>\n\n\n\n<p id=\"6d25\">ChatService interface is straightforward, supporting both completion modes: synchronous and asynchronous.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">public interface<\/span> ChatService {\n\n   String chat(<span class=\"has-inline-color has-vivid-purple-color\">String message<\/span>);\n   TokenStream chatTokenStream(<span class=\"has-inline-color has-vivid-purple-color\">String 
 message<\/span>);\n}<\/code><\/pre>\n\n\n\n<p id=\"aea7\">\u201cSynchronous (or blocking) completion\u201d means that when our bot sends a request to the LLM API, it waits for the operation to complete and the API to return the response before proceeding with any further actions.<\/p>\n\n\n\n<p id=\"79d2\">\u201cAsynchronous (or streaming) completion\u201d refers to a non-blocking interaction pattern with an LLM API, such as OpenAI\u2019s API, where the application does not wait for the API call to complete. Instead, the LLM streams the response partially, token by token, as the tokens become available.<\/p>\n\n\n\n<p id=\"a24e\">It is important to note that:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Not all LLM APIs support \u201casynchronous completion\u201d,<\/li><li>\u201cAsynchronous completion\u201d integration with the Telegram bot, as will be shown below, is quite tricky and challenging,<\/li><li>Synchronous completion is comparatively simpler but less user-friendly<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"c8b1\">Integrate Completion Interfaces into Telegram bot<\/h3>\n\n\n\n<p id=\"3639\">We can start coding our Telegram bot by extending the well-known&nbsp;<strong><em>TelegramLongPollingBot<\/em><\/strong>, adding an externally configured botUsername and botToken, and implementing the&nbsp;<em>onUpdateReceived<\/em>&nbsp;method. We also have to implement a listener for&nbsp;<em>ApplicationReadyEvent<\/em>&nbsp;that registers our bot after our Spring Boot application starts. 
As shown below, the sending of Telegram messages is encapsulated in the following&nbsp;<em>TelegramSenderService<\/em>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Component<\/span>\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@RequiredArgsConstructor<\/span>\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Slf4j<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public class<\/span> TelegramBot <span class=\"has-inline-color has-vivid-red-color\">extends<\/span> TelegramLongPollingBot <span class=\"has-inline-color has-vivid-red-color\">implements <\/span>ApplicationListener&lt;ApplicationReadyEvent&gt; {\n   <span class=\"has-inline-color has-vivid-red-color\">private final<\/span> TelegramSenderService telegramSenderService;\n\n\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"${telegram.bot.name}\")<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">private <\/span>String botUsername;\n\n\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"${telegram.bot.token}\")<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">private <\/span>String botToken;\n\n\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Override<\/span>\n  <span class=\"has-inline-color has-vivid-red-color\"> public void<\/span> onUpdateReceived(<span class=\"has-inline-color has-vivid-purple-color\">Update update<\/span>) {\n       log.debug(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"Update object {}\"<\/span>, update);\n       <span class=\"has-inline-color has-vivid-red-color\">if <\/span>(update.hasMessage() &amp;&amp; update.getMessage().hasText()) {\n           <span class=\"has-inline-color has-vivid-purple-color\">String <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">question<\/span> = update.getMessage().getText();\n 
          <span class=\"has-inline-color has-vivid-purple-color\">long <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">chat_id <\/span>= update.getMessage().getChatId();\n           telegramSenderService.sendMessage(chat_id, question);\n       }\n   }\n\n\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@Override<\/span>\n   <span class=\"has-inline-color has-vivid-red-color\">public void<\/span> onApplicationEvent(<span class=\"has-inline-color has-luminous-vivid-amber-color\">@NotNull<\/span> <span class=\"has-inline-color has-vivid-purple-color\">ApplicationReadyEvent event<\/span>) {\n       <span class=\"has-inline-color has-vivid-red-color\">try <\/span>{\n           <span class=\"has-inline-color has-vivid-purple-color\">TelegramBotsApi <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">botsApi <\/span>= <span class=\"has-inline-color has-vivid-red-color\">new <\/span>TelegramBotsApi(DefaultBotSession.class);\n           botsApi.registerBot(<span class=\"has-inline-color has-vivid-purple-color\">this<\/span>);\n       } <span class=\"has-inline-color has-vivid-red-color\">catch <\/span>(TelegramApiException e) {\n           log.error(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"Failed to register telegram bot\"<\/span>, e);\n       }\n   }\n\n\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\"> @Override<\/span>\n    <span class=\"has-inline-color has-vivid-red-color\">public <\/span>String getBotUsername<span class=\"has-inline-color has-vivid-purple-color\">()<\/span> {\n   <span class=\"has-inline-color has-vivid-red-color\">return <\/span>botUsername;\n   }\n\n\n  <span class=\"has-inline-color has-luminous-vivid-amber-color\">  @Override<\/span>\n   <span class=\"has-inline-color has-vivid-red-color\">public <\/span>String getBotToken<span class=\"has-inline-color has-vivid-purple-color\">()<\/span> {\n   <span class=\"has-inline-color 
return">
has-vivid-red-color\">return <\/span>botToken;\n   }\n\n\n}<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"553e\">TelegramSenderService: \u201cSynchronous completion\u201d case<\/h4>\n\n\n\n<p id=\"2f2b\">In the code below, it is worth paying attention to:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><em>parseMode<\/em><\/li><li>the&nbsp;<em>sendTypingEffect<\/em>&nbsp;method<\/li><\/ul>\n\n\n\n<p id=\"6753\">The&nbsp;<em>parseMode<\/em>&nbsp;is useful in cases where the LLM answers a query using&nbsp;<a href=\"https:\/\/www.markdownguide.org\/basic-syntax\/\" rel=\"noreferrer noopener\" target=\"_blank\">Markdown<\/a>. LLMs like&nbsp;<a href=\"https:\/\/platform.openai.com\/docs\/models\/gpt-4-and-gpt-4-turbo\" rel=\"noreferrer noopener\" target=\"_blank\">OpenAI gpt-4<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/ollama.com\/library\/gemma\" rel=\"noreferrer noopener\" target=\"_blank\">Ollama gemma<\/a>&nbsp;are able to answer using Markdown syntax. For such models, we can configure our application in the following way:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>telegram.bot.parsemode=Markdown<\/code><\/pre>\n\n\n\n<p>The&nbsp;<em>sendTypingEffect<\/em>&nbsp;method enhances the user experience by \u201csimulating typing\u201d as a reaction to a user query.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Slf4j<\/span>\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Service<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public class<\/span> TelegramSenderServiceImpl <span class=\"has-inline-color has-vivid-red-color\">extends <\/span>DefaultAbsSender <span class=\"has-inline-color has-vivid-red-color\">implements <\/span>TelegramSenderService {\n\n\n  <span class=\"has-inline-color has-vivid-red-color\"> private final<\/span> ChatService chatService;\n\n\n   <span class=\"has-inline-color 
has-luminous-vivid-amber-color\">@Value(\"${telegram.bot.parsemode:#{null}}\")<\/span>\n   <span class=\"has-inline-color has-vivid-red-color\">private <\/span>String parseMode;\n\n\n  \n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Override<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public void<\/span> sendMessage(<span class=\"has-inline-color has-vivid-purple-color\">Long chatId<\/span>, <span class=\"has-inline-color has-vivid-purple-color\">String text<\/span>) {\n   sendTypingEffect(chatId);\n   sendMessage(chatId, chatService.chat(text), parseMode);\n}\n\n\n   <span class=\"has-inline-color has-vivid-red-color\">protected <\/span>TelegramSenderServiceImpl(<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"${telegram.bot.token}\"<\/span>)<span class=\"has-inline-color has-vivid-purple-color\"> String botToken, ChatService chatService<\/span>) {\n       <span class=\"has-inline-color has-vivid-purple-color\">super<\/span>(<span class=\"has-inline-color has-vivid-red-color\">new <\/span>DefaultBotOptions(), botToken);\n       <span class=\"has-inline-color has-vivid-purple-color\">this<\/span>.chatService = chatService;\n   }\n\n\n     <span class=\"has-inline-color has-vivid-red-color\">private void<\/span> sendMessage(<span class=\"has-inline-color has-vivid-purple-color\">Long chatId, String text, String parseMode<\/span>) {\n       <span class=\"has-inline-color has-vivid-purple-color\">SendMessage <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">message <\/span>= <span class=\"has-inline-color has-vivid-red-color\">new <\/span>SendMessage();\n       message.setChatId(String.valueOf(chatId));\n       message.setParseMode(parseMode);\n       message.setText(text);\n\n\n       log.debug(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"Message to send {} \"<\/span>, message);\n       <span class=\"has-inline-color has-vivid-red-color\">try <\/span>{\n           
execute(message);\n       } <span class=\"has-inline-color has-vivid-red-color\">catch <\/span>(TelegramApiException e) {\n           log.error(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"TelegramApiException\"<\/span>, e);\n       }\n   }\n\n\n   <span class=\"has-inline-color has-vivid-red-color\">private void<\/span> sendTypingEffect(<span class=\"has-inline-color has-vivid-purple-color\">Long chatId<\/span>) {\n       <span class=\"has-inline-color has-vivid-purple-color\">SendChatAction <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">chatAction <\/span>= <span class=\"has-inline-color has-vivid-red-color\">new <\/span>SendChatAction(chatId.toString(), ActionType.TYPING.name(), <span class=\"has-inline-color has-vivid-red-color\">null<\/span>);\n       <span class=\"has-inline-color has-vivid-red-color\">try <\/span>{\n           execute(chatAction);\n       } <span class=\"has-inline-color has-vivid-red-color\">catch <\/span>(TelegramApiException e) {\n           log.error(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"TelegramApiException\"<\/span>, e);\n       }\n   }\n\n\n. . .<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"d952\">TelegramSenderService: \u201cStreaming completion\u201d case<\/h4>\n\n\n\n<p id=\"6f84\">The \u201cstreaming completion\u201d implementation on top of the Telegram API is based on the following observations:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>It is impractical to send each string received from&nbsp;<em>TokenStream<\/em><strong><em>&nbsp;chatTokenStream<\/em><\/strong><em>(String message)<\/em>&nbsp;to Telegram as a separate message. The chat screen would be cluttered with isolated messages, each carrying its own header and timestamp<\/li><li>To avoid this issue, we buffer tokens: we group a portion of tokens and repeatedly update the same message, keeping a single header and timestamp. 
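The buffering idea itself can be sketched without any Telegram or LLM dependencies (a simplified illustration; TokenBufferSketch and its flush callback are hypothetical stand-ins for our sender service and its message-editing call):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Dependency-free sketch of token buffering: instead of pushing every
// streamed token as its own message, accumulate tokens and "flush"
// (i.e. edit the same message) once per chunk of amountChunk tokens.
public class TokenBufferSketch {

    private final StringBuilder buffer = new StringBuilder();
    private final int amountChunk;
    private final Consumer<String> flush;   // stand-in for the message edit
    private int tokensSinceFlush = 0;

    public TokenBufferSketch(int amountChunk, Consumer<String> flush) {
        this.amountChunk = amountChunk;
        this.flush = flush;
    }

    public void onNext(String token) {
        buffer.append(token);
        if (++tokensSinceFlush >= amountChunk) {
            flush.accept(buffer.toString()); // one message edit per chunk
            tokensSinceFlush = 0;
        }
    }

    public void onComplete() {
        flush.accept(buffer.toString());     // final edit with the full answer
    }

    public static void main(String[] args) {
        List<String> edits = new ArrayList<>();
        TokenBufferSketch sketch = new TokenBufferSketch(3, edits::add);
        for (String t : List.of("Hello", " ", "world", ",", " ", "bot!")) {
            sketch.onNext(t);
        }
        sketch.onComplete();
        System.out.println(edits);
    }
}
```

With a chunk size of 3, six tokens produce two chunk flushes plus a final edit, instead of six separate messages.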
To implement this approach, we can use the following method:<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">public void<\/span> updateMessage(<span class=\"has-inline-color has-vivid-purple-color\">Long id, Integer msgId, String what<\/span>) {\n   <span class=\"has-inline-color has-vivid-purple-color\">EditMessageText <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">editMessageText <\/span>= EditMessageText.builder()\n           .chatId(String.valueOf(id))\n           .messageId(msgId)\n           .text(what)\n           .parseMode(parseMode)\n           .build();\n\n\n   log.debug(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"Edited message to send {} \"<\/span>, editMessageText);\n   <span class=\"has-inline-color has-vivid-red-color\">try <\/span>{\n       execute(editMessageText);\n   } <span class=\"has-inline-color has-vivid-red-color\">catch <\/span>(TelegramApiException e) {\n       log.error(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"Message is not edited.\"<\/span>, e);\n   }\n}<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\"><li>The challenge here is the risk of receiving Telegram API error 429. 
To minimize such a risk, we have to organize a \u201cdelay\u201d between sending our&nbsp;<strong>buffered portion<\/strong>&nbsp;of tokens and receiving the next portion from&nbsp;<em>TokenStream<\/em><\/li><\/ul>\n\n\n\n<p id=\"bf09\">The implementation idea is shown in the picture below.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:875\/0*Ley6ZQx4l-leeqgq\" alt=\"\"\/><\/figure>\n\n\n\n<p id=\"c026\">So, we can implement this approach in the following way:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Changing one line of code in the&nbsp;<em>onUpdateReceived<\/em>&nbsp;method of&nbsp;<em>TelegramBot<\/em>:<\/li><\/ul>\n\n\n\n<p id=\"e347\">\u2014 from&nbsp;<em>telegramSenderService.sendMessage(chat_id, question)<\/em>;<\/p>\n\n\n\n<p id=\"44eb\">\u2014 to<em>&nbsp;telegramSenderService.streamMessageToBot(chat_id, question)<\/em>;<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Adding a configurable size for the buffered portion of tokens<\/li><li>Adding a configurable \u201cdelay\u201d between sending such buffered portions<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"${telegram.bot.amountChunk:#{10}}\")<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">private <\/span>Integer amountChunk;\n\n\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"${telegram.bot.waiting:#{600}}\")<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">private <\/span>Long waiting;<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\"><li>Implementing a dedicated&nbsp;<em>StreamingResponseHandler&lt;AiMessage&gt;<\/em>&nbsp;with&nbsp;<em>onNext<\/em>,&nbsp;<em>onComplete<\/em>, and&nbsp;<em>onError<\/em>&nbsp;methods<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Override<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public void<\/span> streamMessageToBot(<span 
class=\"has-inline-color has-vivid-purple-color\">Long chatId, String question<\/span>) {\n   <span class=\"has-inline-color has-vivid-purple-color\">AtomicBoolean <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">isFirst <\/span>= <span class=\"has-inline-color has-vivid-red-color\">new <\/span>AtomicBoolean(<span class=\"has-inline-color has-vivid-red-color\">true<\/span>);\n   AtomicReference&lt;String&gt; resultAnswerText = <span class=\"has-inline-color has-vivid-red-color\">new <\/span>AtomicReference&lt;&gt;(StringUtils.EMPTY);\n   <span class=\"has-inline-color has-vivid-purple-color\">AtomicInteger <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">loadedChunks <\/span>= <span class=\"has-inline-color has-vivid-red-color\">new <\/span>AtomicInteger(<span class=\"has-inline-color has-vivid-cyan-blue-color\">0<\/span>);\n   AtomicReference&lt;Message&gt; message = <span class=\"has-inline-color has-vivid-red-color\">new <\/span>AtomicReference&lt;&gt;();\n   sendTypingEffect(chatId);\n   <span class=\"has-inline-color has-vivid-purple-color\">TokenStream <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">answer <\/span>= chatService.chatTokenStream(question);\n   answer\n           .onNext(chunk -&gt; aiChunkMessageConsumer(chunk, chatId, isFirst, resultAnswerText, loadedChunks, message))\n           .onComplete(aiMessageResponse -&gt; completeConsumer(loadedChunks.get(), message.get(), resultAnswerText.get(), aiMessageResponse.content().text()))\n           .onError(<span class=\"has-inline-color has-vivid-purple-color\">this<\/span>::errorConsumer)\n           .start();\n}<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\"><li>Implementing&nbsp;<em>aiChunkMessageConsumer<\/em>&nbsp;responsible for chunks buffering like in the following code<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">private void<\/span> aiChunkMessageConsumer(<span 
class=\"has-inline-color has-vivid-purple-color\">String chunk, Long chatId, AtomicBoolean isFirst, AtomicReference&lt;String&gt; resultAnswerText,\n                                   AtomicInteger loadedChunks, AtomicReference&lt;Message&gt; message<\/span>) {\n   <span class=\"has-inline-color has-vivid-red-color\">if <\/span>(StringUtils.isNotBlank(chunk)) {\n       <span class=\"has-inline-color has-vivid-red-color\">if <\/span>(isFirst.get()) {\n           sendFirsChunkOfAnswer(chunk, chatId, isFirst, resultAnswerText, message);\n       } <span class=\"has-inline-color has-vivid-red-color\">else <\/span>{\n           updateMessageByNextChunkOfAnswer(chunk, chatId, loadedChunks, resultAnswerText, message);\n       }\n   }\n}\n\n\n<span class=\"has-inline-color has-vivid-red-color\">private void<\/span> sendFirsChunkOfAnswer(<span class=\"has-inline-color has-vivid-purple-color\">String chunk, Long chatId, AtomicBoolean isFirst,\n                                  AtomicReference&lt;String&gt; resultAnswerText,\n                                  AtomicReference&lt;Message&gt; message<\/span>) {\n   <span class=\"has-inline-color has-vivid-purple-color\">Message <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">returnedMessage <\/span>= sendMessageWithReturn(chatId, chunk, parseMode);\n   <span class=\"has-inline-color has-vivid-red-color\">if <\/span>(returnedMessage == <span class=\"has-inline-color has-vivid-red-color\">null<\/span>) {\n       <span class=\"has-inline-color has-vivid-red-color\">return<\/span>;\n   }\n   message.set(returnedMessage);\n   isFirst.set(<span class=\"has-inline-color has-vivid-red-color\">false<\/span>);\n   resultAnswerText.set(resultAnswerText.get() + chunk);\n}\n\n\n<span class=\"has-inline-color has-vivid-red-color\">private void<\/span> updateMessageByNextChunkOfAnswer(<span class=\"has-inline-color has-vivid-purple-color\">String chunk, Long chatId, AtomicInteger loadedChunks,\n                               
              AtomicReference&lt;String&gt; resultAnswerText,\n                                             AtomicReference&lt;Message&gt; message<\/span>) {\n   <span class=\"has-inline-color has-vivid-purple-color\">var <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">msg <\/span>= message.get();\n   resultAnswerText.set(resultAnswerText.get() + chunk);\n   loadedChunks.set(loadedChunks.get() + <span class=\"has-inline-color has-vivid-cyan-blue-color\">1<\/span>);\n   log.debug(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"Loaded chunks: {}\"<\/span>, loadedChunks.get());\n   <span class=\"has-inline-color has-vivid-red-color\">if <\/span>(loadedChunks.get() == amountChunk) {\n       sendTypingEffect(chatId);\n       updateMessage(msg.getChatId(), msg.getMessageId(), resultAnswerText.get());\n       loadedChunks.set(<span class=\"has-inline-color has-vivid-cyan-blue-color\">0<\/span>);\n       sleep(waiting);\n   }\n}<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\"><li>Implementing a \u201cdelay\u201d between sending is quick and simple:<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">private void<\/span> sleep(<span class=\"has-inline-color has-vivid-purple-color\">Long duration<\/span>) {\n   <span class=\"has-inline-color has-vivid-red-color\">try <\/span>{\n       Thread.sleep(duration);\n   } <span class=\"has-inline-color has-vivid-red-color\">catch <\/span>(InterruptedException e) {\n       Thread.currentThread().interrupt();\n       log.error(<span class=\"has-inline-color has-luminous-vivid-orange-color\">\"Interrupted while waiting between message edits\"<\/span>, e);\n   }\n}<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"38e4\">Implement Completion Interfaces<\/h3>\n\n\n\n<p id=\"ef2d\">To implement the completion interfaces described above, we first have to define two low-level interfaces, <em>Assistant<\/em> and&nbsp;<em>StreamingAssistant<\/em>:<\/p>\n\n\n\n<pre 
class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">public interface<\/span> Assistant {\n   String chat(<span class=\"has-inline-color has-vivid-purple-color\">String message<\/span>);\n}<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">public interface<\/span> StreamingAssistant {\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">@SystemMessage({\n<\/span>           <span class=\"has-inline-color has-luminous-vivid-amber-color\">\"You are a customer support agent and personal financial adviser.\",\n           \"Today is {{current_date}}.\"<\/span>\n   <span class=\"has-inline-color has-luminous-vivid-amber-color\">})<\/span>\n   TokenStream chat(<span class=\"has-inline-color has-vivid-purple-color\">String message<\/span>);\n}<\/code><\/pre>\n\n\n\n<p>Secondly, we initialize our assistants using the builders explained above (see the Step 1 \u2014 Step 3 sections):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-vivid-red-color\">public class <\/span>ChatServiceImpl <span class=\"has-inline-color has-vivid-red-color\">implements <\/span>ChatService {\n\n\n   <span class=\"has-inline-color has-vivid-red-color\">private final<\/span> OllamaChatModelBuilder ollamaChatModelBuilder;\n   <span class=\"has-inline-color has-vivid-red-color\">private final<\/span> OpenAIChatLanguageModelBuilder chatLanguageModelBuilder;\n   <span class=\"has-inline-color has-vivid-red-color\">private final<\/span> StreamingChatModelBuilder streamingChatModelBuilder;\n   <span class=\"has-inline-color has-vivid-red-color\">private final<\/span> MessageWindowChatMemoryBuilder messageWindowChatMemoryBuilder;\n   <span class=\"has-inline-color has-vivid-red-color\">private final<\/span> ContentRetrieverBuilder contentRetrieverBuilder;\n  <span class=\"has-inline-color has-vivid-red-color\"> private final<\/span> ImageModelBuilder 
imageModelBuilder;\n   <span class=\"has-inline-color has-vivid-red-color\">private final<\/span> ToolProvider toolProvider;\n\n   <span class=\"has-inline-color has-vivid-red-color\">private<\/span> StreamingAssistant streamingAssistant;\n   <span class=\"has-inline-color has-vivid-red-color\">private<\/span> Assistant assistant;\n   <span class=\"has-inline-color has-vivid-red-color\">private<\/span> MessageWindowChatMemory messageWindowChatMemory;\n\n\n<span class=\"has-inline-color has-vivid-red-color\">private void<\/span> initAssistant() {\n   <span class=\"has-inline-color has-vivid-purple-color\">ChatLanguageModel <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">chatLanguageModel <\/span>= chatLanguageModelBuilder.build();\n   <span class=\"has-inline-color has-vivid-purple-color\">ContentRetriever <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">contentRetriever <\/span>= contentRetrieverBuilder.build();\n\n\n<span class=\"has-inline-color has-vivid-green-cyan-color\">\/**\nCreating a CompressingQueryTransformer, which compresses the user's query\nand the preceding conversation into a single, stand-alone query.\n*\/<\/span>\n\n\n   <span class=\"has-inline-color has-vivid-purple-color\">QueryTransformer <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">queryTransformer <\/span>= <span class=\"has-inline-color has-vivid-red-color\">new <\/span>CompressingQueryTransformer(chatLanguageModel);\n<span class=\"has-inline-color has-vivid-green-cyan-color\">\/**\nRetrievalAugmentor is the entry point into the customisable RAG flow in LangChain4j;\nconfigure it according to your requirements.\n*\/<\/span>\n\n\n   <span class=\"has-inline-color has-vivid-purple-color\">RetrievalAugmentor <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">retrievalAugmentor <\/span>= DefaultRetrievalAugmentor.builder()\n           .queryTransformer(queryTransformer)\n           .contentRetriever(contentRetriever)\n           .build();\n   messageWindowChatMemory = messageWindowChatMemoryBuilder.build();\n   assistant = 
AiServices.builder(Assistant.class)\n           .chatLanguageModel(chatLanguageModel)\n           .chatMemory(messageWindowChatMemory)\n           .tools(toolProvider)\n           .retrievalAugmentor(retrievalAugmentor)\n           .build();\n\n\n   <span class=\"has-inline-color has-vivid-purple-color\">StreamingChatLanguageModel <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">streamingChatModel <\/span>= streamingChatModelBuilder.build();\n\n\n   streamingAssistant = AiServices.builder(StreamingAssistant.class)\n           .streamingChatLanguageModel(streamingChatModel)\n           .chatMemory(messageWindowChatMemory)\n           .tools(toolProvider)\n           .retrievalAugmentor(retrievalAugmentor)\n           .build();\n}<\/code><\/pre>\n\n\n\n<p>Finally, the implementations of the completion interfaces can look like the following:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Override<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public <\/span>TokenStream chatTokenStream(<span class=\"has-inline-color has-vivid-purple-color\">String message<\/span>) {\n   <span class=\"has-inline-color has-vivid-red-color\">return <\/span>streamingAssistant.chat(message);\n}<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Override<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public <\/span>String chat(<span class=\"has-inline-color has-vivid-purple-color\">String message<\/span>) {\n   <span class=\"has-inline-color has-vivid-red-color\">return <\/span>assistant.chat(message);\n}<\/code><\/pre>\n\n\n\n<p id=\"849b\">It is important to pay attention to the following:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>The assistants are implemented in a declarative manner, like:<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code>AiServices.builder(Assistant.class), 
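\n\/\/ AiServices generates a runtime proxy that implements the given interface,\n\/\/ wiring together the configured chat model, memory, tools, and retrieval\n\/\/ augmentor; no hand-written implementation class is required.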
\nAiServices.builder(StreamingAssistant.class)<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\"><li><em>ToolProvider<\/em>&nbsp;as a container of methods annotated with the&nbsp;<strong>@Tool<\/strong>&nbsp;annotation. It\u2019s a declarative Langchain4j concept, similar to a&nbsp;<a href=\"https:\/\/python.langchain.com\/docs\/modules\/agents\/\" rel=\"noreferrer noopener\" target=\"_blank\">Langchain Agent<\/a>.<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"9359\">In Closing<\/h2>\n\n\n\n<p id=\"c1a2\">To illustrate the process of walking through all 5 steps explained above, let\u2019s consider an example that uses the \u201cUser Manual\u201d of a personal financial management solution, \u201c<a href=\"https:\/\/wisewallet.ai\/\" rel=\"noreferrer noopener\" target=\"_blank\">WiseWallet<\/a>\u201d. This will demonstrate how an AI-powered support chatbot works.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"533\" src=\"https:\/\/kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-4-1024x533.jpeg\" alt=\"\" class=\"wp-image-4827\" srcset=\"https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-4-1024x533.jpeg 1024w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-4-300x156.jpeg 300w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-4-768x400.jpeg 768w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-4-1536x799.jpeg 1536w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-4-2048x1065.jpeg 2048w, https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/Tallenta-Frame-4-360x187.jpeg 360w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p id=\"9cb7\">Let\u2019s test one more thing, just for fun: Langchain4j supports Image Models (<strong>IM<\/strong>) like&nbsp;<a href=\"https:\/\/openai.com\/dall-e-3\" 
rel=\"noreferrer noopener\" target=\"_blank\">Dall-e-3<\/a>. So it\u2019s possible to add image generation features to our bot. To do this, we follow these steps:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Add a configurable&nbsp;<em>ImageModelBuilder<\/em>&nbsp;that extends&nbsp;<em>OpenAIChatModelBuilderParameters<\/em>, as was shown above:<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"${openai.imageModelName}\")<\/span>\nString imageModelName;\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Value(\"${openai.image.quality}\")<\/span>\nString imageQuality;\n\n\n<span class=\"has-inline-color has-luminous-vivid-amber-color\">@Override<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public <\/span>ImageModel build<span class=\"has-inline-color has-vivid-purple-color\">() <\/span>{\n   <span class=\"has-inline-color has-vivid-purple-color\">ImageModel <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">model <\/span>= OpenAiImageModel.builder()\n           .apiKey(OPENAI_API_KEY)\n           .modelName(imageModelName)\n           .quality(imageQuality)\n           .timeout(ofSeconds(timeoutSec.longValue()))\n           .logRequests(logRequests.booleanValue())\n           .logResponses(logResponses.booleanValue())\n           .build();\n   <span class=\"has-inline-color has-vivid-red-color\">return <\/span>model;\n}<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\"><li>Add&nbsp;<strong><em>@Tool<\/em><\/strong>-annotated code like the following:<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code><span class=\"has-inline-color has-luminous-vivid-amber-color\">@Tool(\"Draw a picture based on the following description\")<\/span>\n<span class=\"has-inline-color has-vivid-red-color\">public <\/span>URI generateImageUrl(<span class=\"has-inline-color has-vivid-purple-color\">String description<\/span>) {\n   <span class=\"has-inline-color 
has-vivid-purple-color\">ImageModel <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">model <\/span>= imageModelBuilder.build();\n\n\n   Response&lt;Image&gt; response = model.generate(description);\n   <span class=\"has-inline-color has-vivid-purple-color\">URI <\/span><span class=\"has-inline-color has-vivid-green-cyan-color\">uri <\/span>= response.content().url();\n   <span class=\"has-inline-color has-vivid-red-color\">return <\/span>uri;\n}<\/code><\/pre>\n\n\n\n<p>As a result, our bot can generate a picture, answering a query like \u201c<em>draw a picture about key features of&nbsp;<\/em><a href=\"https:\/\/wisewallet.ai\/\" rel=\"noreferrer noopener\" target=\"_blank\"><em>WiseWallet<\/em><\/a>\u201d.<\/p>\n\n\n\n<p class=\"has-text-align-center\"><img decoding=\"async\" src=\"https:\/\/lh7-us.googleusercontent.com\/JWRsvQs5sn7maFkA9Jyw1OHXmgKZMUtSxOTc7iiEUJIaEyCFTYyNJJEs7lYamNe-qUTCfbAuD1asmee95zl2B7rnNJjaXdux2oolDE-duBO_eUHwRh60tjMkEPR49W6ZnaxEhlOsydzqQD0A4ys3yrA\" style=\"width: 600px;\"><\/p>\n\n\n\n<p>Enjoy your journey of developing an AI-powered Telegram bot with Langchain4j in Java.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"36cd\">References<\/h2>\n\n\n\n<ol class=\"wp-block-list\"><li><a href=\"https:\/\/core.telegram.org\/api\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/core.telegram.org\/api<\/a><\/li><li><a href=\"https:\/\/docs.langchain4j.dev\/tutorials\/\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/docs.langchain4j.dev\/tutorials\/<\/a><\/li><li><a href=\"https:\/\/github.com\/langchain4j\/langchain4j-examples\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/github.com\/langchain4j\/langchain4j-examples<\/a><\/li><li><a href=\"https:\/\/help.openai.com\/en\/\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/help.openai.com\/en\/<\/a><\/li><li><a href=\"https:\/\/github.com\/ollama\/ollama\" rel=\"noreferrer noopener\" 
target=\"_blank\">https:\/\/github.com\/ollama\/ollama<\/a><\/li><li><a href=\"https:\/\/huggingface.co\/sentence-transformers\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/huggingface.co\/sentence-transformers<\/a><\/li><li><a href=\"https:\/\/arxiv.org\/abs\/2310.04408\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2310.04408<\/a><\/li><li><a href=\"https:\/\/arxiv.org\/abs\/2310.05736\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2310.05736<\/a><\/li><li><a href=\"https:\/\/arxiv.org\/abs\/2311.11045\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2311.11045<\/a><\/li><li><a href=\"https:\/\/docs.cohere.com\/reference\/\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/docs.cohere.com\/reference\/<\/a><\/li><li><a href=\"https:\/\/platform.openai.com\/docs\/api-reference\/\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/platform.openai.com\/docs\/api-reference\/<\/a><\/li><\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Recently updated on May 20, 2025 Introduction or \u201cWhat problem are we solving?\u201d This article aims to illustrate how to create a\u00a0Lego&#8230;<\/p>\n","protected":false},"author":1,"featured_media":4829,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[256],"tags":[],"class_list":{"0":"post-4823","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai"},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.4 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>5 steps to develop an AI-powered Telegram bot with Langchain4j in Java | Kindgeek<\/title>\n<meta name=\"description\" content=\"A Comprehensive 
Guide to Building AI-Driven Telegram Bots with Langchain4j in Java\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"5 steps to develop an AI-powered Telegram bot with Langchain4j in Java | Kindgeek\" \/>\n<meta property=\"og:description\" content=\"A Comprehensive Guide to Building AI-Driven Telegram Bots with Langchain4j in Java\" \/>\n<meta property=\"og:url\" content=\"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java\" \/>\n<meta property=\"og:site_name\" content=\"Kindgeek\" \/>\n<meta property=\"article:author\" content=\"https:\/\/www.facebook.com\/kindgeek\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-10T13:49:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-05-20T10:48:28+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/LLM-3-e1712757163219.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1105\" \/>\n\t<meta property=\"og:image:height\" content=\"628\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"kindgeek\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"kindgeek\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"12 minutes\" \/>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"5 steps to develop an AI-powered Telegram bot with Langchain4j in Java | Kindgeek","description":"A Comprehensive Guide to Building AI-Driven Telegram Bots with Langchain4j in Java","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java","og_locale":"en_US","og_type":"article","og_title":"5 steps to develop an AI-powered Telegram bot with Langchain4j in Java | Kindgeek","og_description":"A Comprehensive Guide to Building AI-Driven Telegram Bots with Langchain4j in Java","og_url":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java","og_site_name":"Kindgeek","article_author":"https:\/\/www.facebook.com\/kindgeek","article_published_time":"2024-04-10T13:49:31+00:00","article_modified_time":"2025-05-20T10:48:28+00:00","og_image":[{"width":1105,"height":628,"url":"https:\/\/kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/LLM-3-e1712757163219.png","type":"image\/png"}],"author":"kindgeek","twitter_card":"summary_large_image","twitter_misc":{"Written by":"kindgeek","Est. 
reading time":"12 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java#article","isPartOf":{"@id":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java"},"author":{"name":"kindgeek","@id":"https:\/\/kindgeek.com\/blog\/#\/schema\/person\/ac144d1174b0915c3f6ba63048221fc0"},"headline":"5 steps to develop an AI-powered Telegram bot with Langchain4j in Java","datePublished":"2024-04-10T13:49:31+00:00","dateModified":"2025-05-20T10:48:28+00:00","mainEntityOfPage":{"@id":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java"},"wordCount":2346,"commentCount":0,"publisher":{"@id":"https:\/\/kindgeek.com\/blog\/#organization"},"image":{"@id":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java#primaryimage"},"thumbnailUrl":"https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/LLM-3-e1712757163219.png","articleSection":["AI"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java","url":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java","name":"5 steps to develop an AI-powered Telegram bot with Langchain4j in Java | 
Kindgeek","isPartOf":{"@id":"https:\/\/kindgeek.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java#primaryimage"},"image":{"@id":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java#primaryimage"},"thumbnailUrl":"https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/LLM-3-e1712757163219.png","datePublished":"2024-04-10T13:49:31+00:00","dateModified":"2025-05-20T10:48:28+00:00","description":"A Comprehensive Guide to Building AI-Driven Telegram Bots with Langchain4j in Java","breadcrumb":{"@id":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java#primaryimage","url":"https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/LLM-3-e1712757163219.png","contentUrl":"https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2024\/04\/LLM-3-e1712757163219.png","width":1105,"height":628},{"@type":"BreadcrumbList","@id":"https:\/\/kindgeek.com\/blog\/post\/5-steps-to-develop-an-ai-powered-telegram-bot-with-langchain4j-in-java#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/kindgeek.com\/blog"},{"@type":"ListItem","position":2,"name":"5 steps to develop an AI-powered Telegram bot with Langchain4j in Java"}]},{"@type":"WebSite","@id":"https:\/\/kindgeek.com\/blog\/#website","url":"https:\/\/kindgeek.com\/blog\/","name":"Kindgeek","description":"Blog | 
Kindgeek","publisher":{"@id":"https:\/\/kindgeek.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/kindgeek.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/kindgeek.com\/blog\/#organization","name":"Kindgeek","url":"https:\/\/kindgeek.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/kindgeek.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/kindgeek.com\/blog\/wp-content\/uploads\/2026\/02\/kg-logo-updated.png","contentUrl":"https:\/\/kindgeek.com\/blog\/wp-content\/uploads\/2026\/02\/kg-logo-updated.png","width":300,"height":60,"caption":"Kindgeek"},"image":{"@id":"https:\/\/kindgeek.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/kindgeek.com\/blog\/#\/schema\/person\/ac144d1174b0915c3f6ba63048221fc0","name":"kindgeek","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/kindgeek.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2020\/12\/favicon.png","contentUrl":"https:\/\/www.kindgeek.com\/blog\/wp-content\/uploads\/2020\/12\/favicon.png","caption":"kindgeek"},"sameAs":["https:\/\/kindgeek.com\/blog","https:\/\/www.facebook.com\/kindgeek","https:\/\/www.instagram.com\/kindgeeks","https:\/\/www.linkedin.com\/company\/kindgeek\/mycompany\/"],"url":"https:\/\/www.kindgeek.com\/blog\/post\/author\/kindgeek"}]}},"_links":{"self":[{"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/posts\/4823","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":
[{"embeddable":true,"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/comments?post=4823"}],"version-history":[{"count":7,"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/posts\/4823\/revisions"}],"predecessor-version":[{"id":4837,"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/posts\/4823\/revisions\/4837"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/media\/4829"}],"wp:attachment":[{"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/media?parent=4823"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/categories?post=4823"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kindgeek.com\/blog\/wp-json\/wp\/v2\/tags?post=4823"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}