Human In The Loop with LangChain and SQL tools
- Mohammed Vasim
- LLM, LangChain
- 22 Feb, 2026
Today we will implement the human-in-the-loop concept using LangChain and SQL tools to get hands-on experience with how it works internally.
Human in the loop (HITL) is a pattern that interrupts an agentic flow before executing sensitive tools, i.e. tools that write, delete, or access sensitive information.
This matters because an unchecked agent could expose sensitive information, or overwrite or delete data that is still needed.
For example, consider tools for SQL operations. Such tools can read, write, manipulate, or delete data. With HITL, we can pause the agent before tool execution and cross-check whether the generated query does anything other than just read.
The same idea applies when an LLM or agent is dealing with local files.
Here we import the necessary modules. For this tutorial, we are going to use:
- GPT OSS 120B, an open-source model, served via the Novita AI provider.
- LangChain and LangGraph for orchestration.
- SQLToolKit from LangChain community.
- Chinook.db, a sample SQLite DB for trial and error.
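Before the imports below will resolve, the relevant packages need to be installed. A likely set, assuming a LangChain 1.x environment (package names are my assumption of the current PyPI names; pin versions to taste):

```shell
pip install -U langchain langgraph langchain-community requests
```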
import os
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langgraph.checkpoint.memory import InMemorySaver
Download the Chinook DB from Google Cloud Storage
This DB has a few tables such as 'Album', 'Artist', 'Customer', 'Employee', 'Genre', 'Invoice', 'InvoiceLine', 'MediaType', 'Playlist', 'PlaylistTrack', and 'Track'.
import requests, pathlib

url = "https://storage.googleapis.com/benchmarks-artifacts/chinook/Chinook.db"
local_path = pathlib.Path("data/sql-agent/Chinook.db")
# Make sure the target directory exists before writing
local_path.parent.mkdir(parents=True, exist_ok=True)

if local_path.exists():
    # Delete the stale copy if it exists
    print(f"Deleting file: {local_path}")
    local_path.unlink()

response = requests.get(url)
if response.status_code == 200:
    local_path.write_bytes(response.content)
    print(f"File downloaded and saved as {local_path}")
else:
    print(f"Failed to download the file. Status code: {response.status_code}")
from langchain_community.utilities import SQLDatabase
db = SQLDatabase.from_uri("sqlite:///data/sql-agent/Chinook.db")
print(f"Dialect: {db.dialect}")
print(f"Available tables: {db.get_usable_table_names()}")
print(f'Sample output: {db.run("SELECT * FROM Artist LIMIT 5;")}')
SQL Database Toolkit
This toolkit bundles the most common tools required for typical DB operations:
- sql_db_query: Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', use sql_db_schema to query the correct table fields.
- sql_db_schema: Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3
- sql_db_list_tables: Input is an empty string, output is a comma-separated list of tables in the database.
- sql_db_query_checker: Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!
from langchain_community.agent_toolkits import SQLDatabaseToolkit

model = init_chat_model(
    model="openai:openai/gpt-oss-120b",
    api_key=os.getenv("NOVITA_API_KEY"),
    base_url="https://api.novita.ai/openai"
)

toolkit = SQLDatabaseToolkit(db=db, llm=model)
tools = toolkit.get_tools()

for tool in tools:
    print(f"{tool.name}: {tool.description}\n")
Define system prompt
system_prompt = """
You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer. Unless the user
specifies a specific number of examples they wish to obtain, always limit your
query to at most {top_k} results.
To start you should ALWAYS look at the tables in the database to see what you
can query. Do NOT skip this step.
Then you should query the schema of the most relevant tables.
""".format(
    dialect=db.dialect,
    top_k=5,
)
Create the agent with the necessary tools and middleware:
- create_agent from LangChain: a prebuilt, graph-based agent, well suited for prototypes.
- GPT OSS 120B, an open-source model from OpenAI.
- tools: the DB tools from the toolkit above.
- HumanInTheLoopMiddleware: a middleware that enables the human-in-the-loop option in the workflow. Its interrupt_on option controls which tools require interruptions, and which decisions are allowed for each. You give a tool name as the key and an allowance as the value, where the allowance can be a boolean or a dictionary:
  - To allow all decisions (approve, edit, reject), specify tool_name: True.
  - If no approval is needed, specify tool_name: False.
  - To restrict to specific decisions, such as approve or reject only, specify a dictionary: {"allowed_decisions": ["approve", "reject"]}.
- InMemorySaver to checkpoint each step taken by the agent.
agent = create_agent(
    system_prompt=system_prompt,
    model=init_chat_model(
        model="openai:openai/gpt-oss-120b",
        api_key=os.getenv("NOVITA_API_KEY"),
        base_url="https://api.novita.ai/openai"
    ),
    tools=tools,
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                # "write_file": True,  # All decisions (approve, edit, reject) allowed
                "sql_db_query": {"allowed_decisions": ["approve", "reject"]},  # No editing allowed
                # "read_data": False,  # Safe operation, no approval needed
            },
            # Prefix for interrupt messages - combined with tool name and args to form the full message
            # e.g., "Tool execution pending approval: execute_sql with query='DELETE FROM...'"
            # Individual tools can override this by specifying a "description" in their interrupt config
            description_prefix="Tool execution pending approval",
        ),
    ],
    # Human-in-the-loop requires checkpointing to handle interrupts.
    # In production, use a persistent checkpointer like AsyncPostgresSaver.
    checkpointer=InMemorySaver(),
)
Utility to ask the agent a query
- It takes a configurable containing a thread_id, which identifies and tracks the same session across requests.
- We call agent.invoke with a list of messages, passing the runtime config so memory is tied to that thread.
- We use a plain invoke here for simplicity; for incremental output you could switch to agent.stream with the "updates" stream_mode.
config = {"configurable": {"thread_id": "some_id"}}

def ask_agent(query: str):
    """Ask user utility function"""
    # Human-in-the-loop leverages LangGraph's persistence layer.
    # You must provide a thread ID to associate the execution with a conversation thread,
    # so the conversation can be paused and resumed (as is needed for human review).
    # Run the graph until the interrupt is hit.
    result = agent.invoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": query,
                }
            ]
        },
        config=config,
    )
    return result
result = ask_agent("Delete track table from db as it is old")
print(result.keys())
print(result["messages"][-1].content)
After asking the agent to perform an operation that requires approval, check whether the result contains the __interrupt__ key. If it does not, the agent is still negotiating verbally through the conversation, as you can see in the example above.
- The agent may respond with a "table removed" style message; the presence of the __interrupt__ key in the result tells us it actually called a tool with a DROP query and is now awaiting human approval, as in the following image:

- The agent may respond with nothing if a tool error occurred.
- The agent may respond with "I am sorry, I could not help" (this calls for prompt improvements to avoid).
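The branching described above can be captured in a small helper that only inspects the result dict returned by invoke. The helper name is mine; `__interrupt__` is the key LangGraph surfaces pending interrupts under:

```python
def needs_approval(result: dict) -> bool:
    """Return True when the agent paused on a HITL interrupt.

    LangGraph puts pending interrupts under the "__interrupt__" key of the
    invoke() result; if it is absent, the agent answered (or asked a
    clarifying question) without calling a guarded tool.
    """
    return bool(result.get("__interrupt__"))

# A plain conversational answer, no guarded tool was called:
print(needs_approval({"messages": ["..."]}))                 # False
# A paused run awaiting a human decision:
print(needs_approval({"messages": [], "__interrupt__": ["<Interrupt>"]}))  # True
```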
At this point we don't need the follow-up below. But if the agent asks for confirmation conversationally before even calling the tool, go ahead with a follow-up like this:
# result = ask_agent("Yes go ahead")
# print(result)
Interruption message
Let's see what an interruption message looks like:
# The interrupt contains the full HITL request with action_requests and review_configs
if "__interrupt__" in result:
    print(result["__interrupt__"])
# > [
# >     Interrupt(
# >         value={
# >             'action_requests': [
# >                 {
# >                     'name': 'execute_sql',
# >                     'arguments': {'query': 'DELETE FROM records WHERE created_at < NOW() - INTERVAL \'30 days\';'},
# >                     'description': 'Tool execution pending approval\n\nTool: execute_sql\nArgs: {...}'
# >                 }
# >             ],
# >             'review_configs': [
# >                 {
# >                     'action_name': 'execute_sql',
# >                     'allowed_decisions': ['approve', 'reject']
# >                 }
# >             ]
# >         }
# >     )
# > ]
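Before deciding, a human reviewer usually only wants the tool name and arguments out of that payload. Here is a minimal sketch that walks the structure shown above (the helper name is mine, and it assumes the interrupt value dict has the shape printed in the comments; some versions name the field "args" rather than "arguments", so both are checked):

```python
def summarize_interrupt(value: dict) -> list[str]:
    """Flatten a HITL interrupt payload into human-readable review lines."""
    lines = []
    for req in value.get("action_requests", []):
        # Field name differs across versions: try "arguments", then "args"
        args = req.get("arguments") or req.get("args", {})
        lines.append(f"{req['name']}: {args}")
    return lines

sample = {
    "action_requests": [
        {"name": "sql_db_query", "arguments": {"query": "DROP TABLE Track;"}}
    ],
    "review_configs": [
        {"action_name": "sql_db_query", "allowed_decisions": ["approve", "reject"]}
    ],
}
for line in summarize_interrupt(sample):
    print(line)  # sql_db_query: {'query': 'DROP TABLE Track;'}
```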
from langgraph.types import Command

config = {"configurable": {"thread_id": "some_id"}}

# Resume with approval decision
agent.invoke(
    Command(
        resume={"decisions": [{"type": "approve"}]}  # or "reject"
    ),
    config=config,  # Same thread ID to resume the paused conversation
)
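If several tool calls are pending in a single interrupt, the resume payload needs one decision per action request, in order. A tiny builder for that shape (the function name is mine; the payload matches the resume call above):

```python
def build_resume(*decisions: str) -> dict:
    """Build the resume payload expected by HumanInTheLoopMiddleware:
    one decision dict per pending action request, in order."""
    allowed = {"approve", "reject", "edit"}
    for d in decisions:
        if d not in allowed:
            raise ValueError(f"unknown decision: {d!r}")
    return {"decisions": [{"type": d} for d in decisions]}

print(build_resume("approve"))            # {'decisions': [{'type': 'approve'}]}
print(build_resume("approve", "reject"))  # two pending tool calls
```

You would pass it the same way as above, e.g. agent.invoke(Command(resume=build_resume("reject")), config=config).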
Database result
Before tool call

After tool call

The Track table has been removed successfully, with approval.