Build a Simple LLM Application with LangChain

This is a lazy guide. In this article, we will create a simple LLM app with a chat model.

Using Language Models

Install LangChain

yarn add langchain @langchain/core

Install an LLM provider

yarn add @langchain/google-genai

I'll use Gemini because it's free and doesn't require a credit card. You can also use another provider, such as OpenAI, Claude, or DeepSeek.
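For example, if you'd rather use OpenAI, only the provider package and the model setup change. A minimal sketch, assuming you have an OPENAI_API_KEY in your .env (the model name here is just an example):

yarn add @langchain/openai

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
  apiKey: process.env.OPENAI_API_KEY,
});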

Create a .env file in the project's root folder and add your GOOGLE_API_KEY.

GOOGLE_API_KEY=your-api-key

Go to https://aistudio.google.com/apikey to get a GOOGLE_API_KEY.

Instantiate the model

import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import dotenv from "dotenv";

// Load variables from .env (including GOOGLE_API_KEY) into process.env
dotenv.config();

const model = new ChatGoogleGenerativeAI({
  model: "gemini-2.0-flash",
  temperature: 0, // 0 keeps answers deterministic; raise it for more varied output
  apiKey: process.env.GOOGLE_API_KEY,
});

// The SystemMessage sets the instruction; the HumanMessage is the user input
const messages = [
  new SystemMessage("Translate the following from English into Vietnamese"),
  new HumanMessage("hi!"),
];

console.log(await model.invoke(messages));

We also need to install dotenv so the application can read variables from the .env file. Now, let's start the application.
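Both steps, as a quick sketch (assuming the code above is saved as index.js and your package.json sets "type": "module", which is needed for the top-level await):

yarn add dotenv
node index.js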

And here's the result: as you can see, there's a bunch of data. Chat models receive message objects as input and generate message objects as output. In addition to text content, message objects convey conversational roles and hold important data, such as tool calls and token usage counts.
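The exact output depends on the provider and model version, but it looks roughly like this (an illustrative, trimmed example, not the literal output):

AIMessage {
  content: 'Chào bạn!',
  additional_kwargs: {},
  tool_calls: [],
  usage_metadata: { input_tokens: 10, output_tokens: 4, total_tokens: 14 }
}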

But don't worry about all of that, just look at the "content" line! Cool, right? =))

Now let's stream the response so it displays word by word instead of printing everything at once:

const stream = await model.stream(messages);

const chunks = [];
for await (const chunk of stream) {
  // Keep each chunk in case we want to reassemble the full message later
  chunks.push(chunk);
  // Print each piece of the response as it arrives
  console.log(`${chunk.content}`);
}

Here's the result: the translation prints piece by piece as each chunk arrives.
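One thing to note: console.log appends a newline after every chunk. If you want the streamed text to appear on a single line, a small variation (not from the original code) is to use process.stdout.write instead:

const stream = await model.stream(messages);

for await (const chunk of stream) {
  // process.stdout.write doesn't append a newline, so the chunks join into one line
  process.stdout.write(`${chunk.content}`);
}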

Conclusion

We've learned how to create a simple LLM application. Have fun!