
Package detail

@langchain/core

langchain-ai · 7.2m · MIT · 1.0.1 · TypeScript support: included

Core LangChain.js abstractions and schemas

llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores

readme

🦜🍎️ @langchain/core


@langchain/core contains the core abstractions and schemas of LangChain.js, including base classes for language models, chat models, vectorstores, retrievers, and runnables.

💾 Quick Install

pnpm install @langchain/core

🤔 What is this?

@langchain/core contains the base abstractions that power the rest of the LangChain ecosystem. These abstractions are designed to be as modular and simple as possible. Examples of these abstractions include those for language models, document loaders, embedding models, vectorstores, retrievers, and more. The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.

For example, you can install other provider-specific packages like this:

pnpm install @langchain/openai

And use them as follows:

import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate(
  `Answer the following question to the best of your ability:\n{question}`
);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0.8,
});

const outputParser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

const stream = await chain.stream({
  question: "Why is the sky blue?",
});

for await (const chunk of stream) {
  console.log(chunk);
}

/*
The
 sky
 appears
 blue
 because
 of
 a
 phenomenon
 known
 as
 Ray
leigh
 scattering
*/

Note that for compatibility, all LangChain packages you use (including the main langchain package, which itself depends on core!) must share the same version of @langchain/core. This means you may need to install or resolve a specific version of @langchain/core that matches the dependencies of the packages you use.
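
If you do hit a version conflict, one way to force a single copy of @langchain/core across your dependency tree is a resolution override in your project's package.json. A minimal sketch (Yarn Classic reads a resolutions field; npm and pnpm have equivalent override fields; the version shown is purely illustrative):

{
  "resolutions": {
    "@langchain/core": "0.3.78"
  }
}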

📦 Creating your own package

Other LangChain packages should add this package as a dependency and extend the classes within. For an example, see the @langchain/anthropic package in this repo.

Because all packages in use must share the same version of core, integration packages should never depend on @langchain/core directly. Instead, they should declare it as both a peer dependency and a dev dependency. We suggest using a tilde range so that different (backwards-compatible) patch versions can resolve:

{
  "name": "@langchain/anthropic",
  "version": "0.0.3",
  "description": "Anthropic integrations for LangChain.js",
  "type": "module",
  "author": "LangChain",
  "license": "MIT",
  "dependencies": {
    "@anthropic-ai/sdk": "^0.10.0"
  },
  "peerDependencies": {
    "@langchain/core": "~0.3.0"
  },
  "devDependencies": {
    "@langchain/core": "~0.3.0"
  }
}

We suggest making all packages cross-compatible with ESM and CJS using a build step like the one in @langchain/anthropic, then running pnpm build before running npm publish.
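
As a rough illustration, the published package.json can then point ESM and CJS consumers at the right build output through an exports map along these lines (the paths are illustrative, not the exact layout of @langchain/anthropic):

{
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  }
}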

💁 Contributing

Because @langchain/core is a low-level package whose abstractions will change infrequently, most contributions should be made in the higher-level LangChain package.

Bugfixes or suggestions should be made using the same guidelines as the main package. See here for detailed information.

Please report any security issues or concerns following our security guidelines.

changelog

@langchain/core

1.0.0

🎉 LangChain v1.0 is here! This release provides a focused, production-ready foundation for building agents with significant improvements to the core abstractions and APIs. See the release notes for more details.

✨ Major Features

Standard content blocks

A new unified API for accessing modern LLM features across all providers:

  • New contentBlocks property: Provides provider-agnostic access to reasoning traces, citations, built-in tools (web search, code interpreters, etc.), and other advanced LLM features
  • Type-safe: Full TypeScript support with type hints for all content block types
  • Backward compatible: Content blocks can be loaded lazily with no breaking changes to existing code

Example:

const response = await model.invoke([
  { role: "user", content: "What is the weather in Tokyo?" },
]);

// Access structured content blocks
for (const block of response.contentBlocks) {
  if (block.type === "thinking") {
    console.log("Model reasoning:", block.thinking);
  } else if (block.type === "text") {
    console.log("Response:", block.text);
  }
}

For more information, see our guide on content blocks.

Enhanced Message API

Improvements to the core message types:

  • Structured content: Better support for multimodal content with the new content blocks API
  • Provider compatibility: Consistent message format across all LLM providers
  • Rich metadata: Enhanced metadata support for tracking message provenance and transformations
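
For example, a multimodal message can be built as an array of content parts (a sketch using HumanMessage; the image URL is purely illustrative):

import { HumanMessage } from "@langchain/core/messages";

// Text plus an image in a single message, expressed as content parts.
const message = new HumanMessage({
  content: [
    { type: "text", text: "What is shown in this picture?" },
    {
      type: "image_url",
      image_url: { url: "https://example.com/photo.jpg" },
    },
  ],
});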

🔧 Improvements

  • Better structured output generation: Core abstractions for generating structured outputs in the main agent loop
  • Improved type safety: Enhanced TypeScript definitions across all core abstractions
  • Performance optimizations: Reduced overhead in message processing and runnable composition
  • Better error handling: More informative error messages and better error recovery
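
For instance, structured outputs can be requested by binding a schema to a chat model (a sketch assuming the ChatOpenAI model from the earlier example and a zod schema):

import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

// Describe the shape of the answer we want back.
const answerSchema = z.object({
  answer: z.string().describe("A short answer to the question"),
  confidence: z.number().describe("Confidence in the answer, from 0 to 1"),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Returns a runnable whose output is parsed to match the schema.
const structuredModel = model.withStructuredOutput(answerSchema);

const result = await structuredModel.invoke("Why is the sky blue?");
// e.g. { answer: "Sunlight is scattered by air molecules...", confidence: 0.9 }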

📦 Package Changes

The @langchain/core package remains focused on essential abstractions:

  • Core message types and content blocks
  • Base runnable abstractions
  • Tool definitions and schemas
  • Middleware infrastructure
  • Callback system
  • Output parsers
  • Prompt templates

🔄 Migration Notes

Backward Compatibility: This release maintains backward compatibility with existing code. Content blocks are loaded lazily, so no changes are required to existing applications.

New Features: To take advantage of new features like content blocks and middleware:

  1. Update to @langchain/core@1.0.0:

    npm install @langchain/core@1.0.0
  2. Use the new contentBlocks property to access rich content:

    const response = await model.invoke(messages);
    console.log(response.contentBlocks); // New API
    console.log(response.content); // Legacy API still works
  3. For middleware and createAgent, install langchain@1.0.0:

    npm install langchain@1.0.0 @langchain/core@1.0.0

0.3.78

Patch Changes

  • 1519a97: update chunk concat logic to match on missing ID fields
  • 079e11d: omit tool call chunks without tool call id

0.3.76

Patch Changes

  • 41bd944: support base64 embeddings format
  • e90bc0a: fix(core): prevent tool call chunks from merging incorrectly in AIMes…
  • 3a99a40: Fix deserialization of RemoveMessage if represented as a plain object
  • 58e9522: make mustache prompts with nested objects work correctly
  • e44dc1b: handle backticks in structured output

0.3.75

Patch Changes

  • d6d841f: fix(core): Fix deep nesting of runnables within traceables

0.3.74

Patch Changes

  • 4e53005: fix(core): Always inherit parent run id onto callback manager from context

0.3.73

Patch Changes

  • a5a2e10: add root export to satisfy bundler requirements