Dataset columns (name — dtype / observed range):

- status — string (1 class)
- repo_name — string (31 classes)
- repo_url — string (31 values)
- issue_id — int64 (1 to 104k)
- title — string (length 4 to 233)
- body — string (length 0 to 186k, nullable)
- issue_url — string (length 38 to 56)
- pull_url — string (length 37 to 54)
- before_fix_sha — string (length 40)
- after_fix_sha — string (length 40)
- report_datetime — unknown
- language — string (5 classes)
- commit_datetime — unknown
- updated_file — string (length 7 to 188)
- chunk_content — string (length 1 to 1.03M)

status | repo_name | repo_url | issue_id | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime | updated_file | chunk_content
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial

def foo(x, y):
    return y

Tool.from_function(
    func=partial(foo, "bar"),
    name="foo",
    description="foobar"
)
```
which, as of 0.0.148 (I think), gives a pydantic error "Partial functions not yet supported in tools." We must instead use this format:
```py
from langchain.tools import Tool
from functools import partial

def foo(x, y):
    return y

Tool.from_function(
    func=lambda y: foo(x="bar", y=y),
    name="foo",
    description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | langchain/tools/base.py | """Tool that takes in function or coroutine directly."""
description: str = ""
func: Callable[..., str]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[str]]] = None
"""The asynchronous version of the function."""
@validator("func", pre=True, always=True)
def validate_func_not_partial(cls, func: Callable) -> Callable:
"""Check that the function is not a partial."""
if isinstance(func, partial):
raise ValueError("Partial functions not yet supported in tools.")
return func
@property
def args(self) -> dict:
"""The tool's input arguments."""
if self.args_schema is not None:
return self.args_schema.schema()["properties"]
return {"tool_input": {"type": "string"}}
def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]: |
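The chunk above ends at the `validate_func_not_partial` validator, which is exactly what raises the error quoted in the issue. As a rough sketch of why lifting the restriction is feasible (illustrative only, not the change the linked PR actually made): `inspect.signature` already resolves `functools.partial` objects and reports only the still-unbound parameters, so a tool wrapper has no intrinsic need to reject them. The `SimpleTool` class below is a made-up stand-in, not langchain code.

```py
from functools import partial
from inspect import signature


def foo(x, y):
    return y


# signature() understands partials: the positionally bound "bar" consumes x,
# so only y remains in the reported signature.
bound = partial(foo, "bar")
print(signature(bound))  # (y)


class SimpleTool:
    """Minimal stand-in for Tool that simply does not reject partials."""

    def __init__(self, name, func, description):
        self.name = name
        self.func = func  # no isinstance(func, partial) check here
        self.description = description

    def run(self, tool_input):
        return self.func(tool_input)


tool = SimpleTool(name="foo", func=bound, description="foobar")
print(tool.run("baz"))  # -> baz
```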
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | langchain/tools/base.py | """Convert tool input to pydantic model."""
args, kwargs = super()._to_args_and_kwargs(tool_input)
all_args = list(args) + list(kwargs.values())
if len(all_args) != 1:
raise ValueError(
f"Too many arguments to single-input tool {self.name}."
f" Args: {all_args}"
)
return tuple(all_args), {}
def _run(
self,
*args: Any,
run_manager: Optional[CallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool."""
new_argument_supported = signature(self.func).parameters.get("callbacks")
return (
self.func(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else self.func(*args, **kwargs)
)
async def _arun( |
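`Tool._run` in the chunk above checks whether the wrapped function's signature declares a `callbacks` parameter and, only in that case, forwards the child callback manager. Below is a small hedged sketch of a function written to take advantage of that; the parameter name `callbacks` is the one the code above looks for, while the function name and body are made up for illustration.

```py
from typing import Any, Optional


def search_api(query: str, callbacks: Optional[Any] = None) -> str:
    """Search a hypothetical API, propagating callbacks if provided."""
    # Because `callbacks` appears in the signature, Tool._run calls
    # search_api(query, callbacks=run_manager.get_child()) instead of
    # search_api(query); nested chains could then reuse the same callbacks.
    return f"results for {query!r}"
```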
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | langchain/tools/base.py | self,
*args: Any,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool asynchronously."""
if self.coroutine:
new_argument_supported = signature(self.coroutine).parameters.get(
"callbacks"
)
return (
await self.coroutine(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else await self.coroutine(*args, **kwargs)
)
raise NotImplementedError("Tool does not support async")
def __init__( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | langchain/tools/base.py | self, name: str, func: Callable, description: str, **kwargs: Any
) -> None:
"""Initialize tool."""
super(Tool, self).__init__(
name=name, func=func, description=description, **kwargs
)
@classmethod
def from_function(
cls,
func: Callable,
name: str,
description: str,
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
**kwargs: Any,
) -> Tool:
"""Initialize tool from a function."""
return cls(
name=name,
func=func,
description=description,
return_direct=return_direct,
args_schema=args_schema,
**kwargs,
)
class StructuredTool(BaseTool): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | langchain/tools/base.py | """Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Callable[..., Any]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
@property
def args(self) -> dict:
"""The tool's input arguments."""
return self.args_schema.schema()["properties"]
def _run(
self,
*args: Any,
run_manager: Optional[CallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool."""
new_argument_supported = signature(self.func).parameters.get("callbacks")
return (
self.func(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else self.func(*args, **kwargs)
)
async def _arun( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | langchain/tools/base.py | self,
*args: Any,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> str:
"""Use the tool asynchronously."""
if self.coroutine:
new_argument_supported = signature(self.coroutine).parameters.get(
"callbacks"
)
return (
await self.coroutine(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else await self.coroutine(*args, **kwargs)
)
raise NotImplementedError("Tool does not support async")
@classmethod
def from_function( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | langchain/tools/base.py | cls,
func: Callable,
name: Optional[str] = None,
description: Optional[str] = None,
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
infer_schema: bool = True,
**kwargs: Any,
) -> StructuredTool:
name = name or func.__name__
description = description or func.__doc__
assert (
description is not None
), "Function must have a docstring if description not provided."
description = f"{name}{signature(func)} - {description.strip()}"
_args_schema = args_schema
if _args_schema is None and infer_schema:
_args_schema = create_schema_from_function(f"{name}Schema", func)
return cls(
name=name,
func=func,
args_schema=_args_schema,
description=description,
return_direct=return_direct,
**kwargs,
)
def tool( |
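`StructuredTool.from_function` in the chunk above defaults the tool name to `func.__name__`, the description to the docstring, and infers an args schema from the signature via `create_schema_from_function`. A minimal usage sketch under those assumptions follows; `multiply` and its docstring are invented for illustration, and the import path is the one the test file below uses.

```py
from langchain.tools.base import StructuredTool


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


calc = StructuredTool.from_function(multiply)
# Name and description come from the function itself; both arguments are
# exposed in the inferred schema, so the tool accepts a dict as input.
print(calc.name)                   # multiply
print(calc.args)                   # {'a': {...}, 'b': {...}}
print(calc.run({"a": 6, "b": 7}))  # 42
```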
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | langchain/tools/base.py | *args: Union[str, Callable],
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
infer_schema: bool = True,
) -> Callable:
"""Make tools out of functions, can be used with or without arguments.
Args:
*args: The arguments to the tool.
return_direct: Whether to return directly from the tool rather
than continuing the agent loop.
args_schema: optional argument schema for user to specify
infer_schema: Whether to infer the schema of the arguments from
the function's signature. This also makes the resultant tool
accept a dictionary input to its `run()` function.
Requires:
- Function must be of type (str) -> str
- Function must have a docstring
Examples:
.. code-block:: python
@tool
def search_api(query: str) -> str:
# Searches the API for the query.
return
@tool("search", return_direct=True)
def search_api(query: str) -> str:
# Searches the API for the query.
return
"""
def _make_with_name(tool_name: str) -> Callable:
def _make_tool(func: Callable) -> BaseTool: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | langchain/tools/base.py | if infer_schema or args_schema is not None:
return StructuredTool.from_function(
func,
name=tool_name,
return_direct=return_direct,
args_schema=args_schema,
infer_schema=infer_schema,
)
assert func.__doc__ is not None, "Function must have a docstring"
return Tool(
name=tool_name,
func=func,
description=f"{tool_name} tool",
return_direct=return_direct,
)
return _make_tool
if len(args) == 1 and isinstance(args[0], str):
return _make_with_name(args[0])
elif len(args) == 1 and callable(args[0]):
return _make_with_name(args[0].__name__)(args[0])
elif len(args) == 0:
def _partial(func: Callable[[str], str]) -> BaseTool: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | langchain/tools/base.py | return _make_with_name(func.__name__)(func)
return _partial
else:
raise ValueError("Too many arguments for tool decorator") |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | """Test tool utils."""
from datetime import datetime
from functools import partial
from typing import Any, Optional, Type, Union
from unittest.mock import MagicMock
import pydantic
import pytest
from pydantic import BaseModel
from langchain.agents.agent import Agent
from langchain.agents.chat.base import ChatAgent
from langchain.agents.conversational.base import ConversationalAgent
from langchain.agents.conversational_chat.base import ConversationalChatAgent
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.react.base import ReActDocstoreAgent, ReActTextWorldAgent
from langchain.agents.self_ask_with_search.base import SelfAskWithSearchAgent
from langchain.agents.tools import Tool, tool
from langchain.tools.base import BaseTool, SchemaAnnotationError, StructuredTool
def test_unnamed_decorator() -> None:
"""Test functionality with unnamed decorator."""
@tool
def search_api(query: str) -> str:
"""Search the API for the query."""
return "API result"
assert isinstance(search_api, BaseTool)
assert search_api.name == "search_api"
assert not search_api.return_direct
assert search_api("test") == "API result"
class _MockSchema(BaseModel): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | arg1: int
arg2: bool
arg3: Optional[dict] = None
class _MockStructuredTool(BaseTool):
name = "structured_api"
args_schema: Type[BaseModel] = _MockSchema
description = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
raise NotImplementedError
def test_structured_args() -> None:
"""Test functionality with structured arguments."""
structured_api = _MockStructuredTool()
assert isinstance(structured_api, BaseTool)
assert structured_api.name == "structured_api"
expected_result = "1 True {'foo': 'bar'}"
args = {"arg1": 1, "arg2": True, "arg3": {"foo": "bar"}}
assert structured_api.run(args) == expected_result
def test_unannotated_base_tool_raises_error() -> None:
"""Test that a BaseTool without type hints raises an exception.""" ""
with pytest.raises(SchemaAnnotationError):
class _UnAnnotatedTool(BaseTool): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | name = "structured_api"
args_schema = _MockSchema
description = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun(
self, arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
raise NotImplementedError
def test_misannotated_base_tool_raises_error() -> None:
"""Test that a BaseTool with the incorrrect typehint raises an exception.""" ""
with pytest.raises(SchemaAnnotationError):
class _MisAnnotatedTool(BaseTool):
name = "structured_api"
args_schema: BaseModel = _MockSchema
description = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun(
self, arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
raise NotImplementedError
def test_forward_ref_annotated_base_tool_accepted() -> None:
"""Test that a using forward ref annotation syntax is accepted.""" ""
class _ForwardRefAnnotatedTool(BaseTool): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | name = "structured_api"
args_schema: "Type[BaseModel]" = _MockSchema
description = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun(
self, arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
raise NotImplementedError
def test_subclass_annotated_base_tool_accepted() -> None:
"""Test BaseTool child w/ custom schema isn't overwritten."""
class _ForwardRefAnnotatedTool(BaseTool):
name = "structured_api"
args_schema: Type[_MockSchema] = _MockSchema
description = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | self, arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
raise NotImplementedError
assert issubclass(_ForwardRefAnnotatedTool, BaseTool)
tool = _ForwardRefAnnotatedTool()
assert tool.args_schema == _MockSchema
def test_decorator_with_specified_schema() -> None:
"""Test that manually specified schemata are passed through to the tool."""
@tool(args_schema=_MockSchema)
def tool_func(arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
"""Return the arguments directly."""
return f"{arg1} {arg2} {arg3}"
assert isinstance(tool_func, BaseTool)
assert tool_func.args_schema == _MockSchema
def test_decorated_function_schema_equivalent() -> None:
"""Test that a BaseTool without a schema meets expectations."""
@tool
def structured_tool_input(
arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
"""Return the arguments directly."""
return f"{arg1} {arg2} {arg3}"
assert isinstance(structured_tool_input, BaseTool)
assert structured_tool_input.args_schema is not None
assert (
structured_tool_input.args_schema.schema()["properties"]
== _MockSchema.schema()["properties"]
== structured_tool_input.args
)
def test_structured_args_decorator_no_infer_schema() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | """Test functionality with structured arguments parsed as a decorator."""
@tool(infer_schema=False)
def structured_tool_input(
arg1: int, arg2: Union[float, datetime], opt_arg: Optional[dict] = None
) -> str:
"""Return the arguments directly."""
return f"{arg1}, {arg2}, {opt_arg}"
assert isinstance(structured_tool_input, BaseTool)
assert structured_tool_input.name == "structured_tool_input"
args = {"arg1": 1, "arg2": 0.001, "opt_arg": {"foo": "bar"}}
expected_result = "1, 0.001, {'foo': 'bar'}"
with pytest.raises(ValueError):
assert structured_tool_input.run(args) == expected_result
def test_structured_single_str_decorator_no_infer_schema() -> None:
"""Test functionality with structured arguments parsed as a decorator."""
@tool(infer_schema=False)
def unstructured_tool_input(tool_input: str) -> str:
"""Return the arguments directly."""
return f"{tool_input}"
assert isinstance(unstructured_tool_input, BaseTool)
assert unstructured_tool_input.args_schema is None
assert unstructured_tool_input.run("foo") == "foo"
def test_base_tool_inheritance_base_schema() -> None:
"""Test schema is correctly inferred when inheriting from BaseTool."""
class _MockSimpleTool(BaseTool):
name = "simple_tool"
description = "A Simple Tool"
def _run(self, tool_input: str) -> str:
return f"{tool_input}"
async def _arun(self, tool_input: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | raise NotImplementedError
simple_tool = _MockSimpleTool()
assert simple_tool.args_schema is None
expected_args = {"tool_input": {"title": "Tool Input", "type": "string"}}
assert simple_tool.args == expected_args
def test_tool_lambda_args_schema() -> None:
"""Test args schema inference when the tool argument is a lambda function."""
tool = Tool(
name="tool",
description="A tool",
func=lambda tool_input: tool_input,
)
assert tool.args_schema is None
expected_args = {"tool_input": {"type": "string"}}
assert tool.args == expected_args
def test_structured_tool_lambda_multi_args_schema() -> None:
"""Test args schema inference when the tool argument is a lambda function."""
tool = StructuredTool.from_function(
name="tool",
description="A tool",
func=lambda tool_input, other_arg: f"{tool_input}{other_arg}",
)
assert tool.args_schema is not None
expected_args = {
"tool_input": {"title": "Tool Input"},
"other_arg": {"title": "Other Arg"},
}
assert tool.args == expected_args
def test_tool_partial_function_args_schema() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | """Test args schema inference when the tool argument is a partial function."""
def func(tool_input: str, other_arg: str) -> str:
return tool_input + other_arg
with pytest.raises(pydantic.error_wrappers.ValidationError):
Tool(
name="tool",
description="A tool",
func=partial(func, other_arg="foo"),
)
def test_empty_args_decorator() -> None:
"""Test inferred schema of decorated fn with no args."""
@tool
def empty_tool_input() -> str:
"""Return a constant."""
return "the empty result"
assert isinstance(empty_tool_input, BaseTool)
assert empty_tool_input.name == "empty_tool_input"
assert empty_tool_input.args == {}
assert empty_tool_input.run({}) == "the empty result"
def test_named_tool_decorator() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | """Test functionality when arguments are provided as input to decorator."""
@tool("search")
def search_api(query: str) -> str:
"""Search the API for the query."""
return "API result"
assert isinstance(search_api, BaseTool)
assert search_api.name == "search"
assert not search_api.return_direct
def test_named_tool_decorator_return_direct() -> None:
"""Test functionality when arguments and return direct are provided as input."""
@tool("search", return_direct=True)
def search_api(query: str) -> str:
"""Search the API for the query."""
return "API result"
assert isinstance(search_api, BaseTool)
assert search_api.name == "search"
assert search_api.return_direct
def test_unnamed_tool_decorator_return_direct() -> None:
"""Test functionality when only return direct is provided."""
@tool(return_direct=True)
def search_api(query: str) -> str:
"""Search the API for the query."""
return "API result"
assert isinstance(search_api, BaseTool)
assert search_api.name == "search_api"
assert search_api.return_direct
def test_tool_with_kwargs() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | """Test functionality when only return direct is provided."""
@tool(return_direct=True)
def search_api(
arg_0: str,
arg_1: float = 4.3,
ping: str = "hi",
) -> str:
"""Search the API for the query."""
return f"arg_0={arg_0}, arg_1={arg_1}, ping={ping}"
assert isinstance(search_api, BaseTool)
result = search_api.run(
tool_input={
"arg_0": "foo",
"arg_1": 3.2,
"ping": "pong",
}
)
assert result == "arg_0=foo, arg_1=3.2, ping=pong"
result = search_api.run(
tool_input={
"arg_0": "foo",
}
)
assert result == "arg_0=foo, arg_1=4.3, ping=hi"
result = search_api.run("foobar")
assert result == "arg_0=foobar, arg_1=4.3, ping=hi"
def test_missing_docstring() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | """Test error is raised when docstring is missing."""
with pytest.raises(AssertionError, match="Function must have a docstring"):
@tool
def search_api(query: str) -> str:
return "API result"
def test_create_tool_positional_args() -> None:
"""Test that positional arguments are allowed."""
test_tool = Tool("test_name", lambda x: x, "test_description")
assert test_tool("foo") == "foo"
assert test_tool.name == "test_name"
assert test_tool.description == "test_description"
assert test_tool.is_single_input
def test_create_tool_keyword_args() -> None:
"""Test that keyword arguments are allowed."""
test_tool = Tool(name="test_name", func=lambda x: x, description="test_description")
assert test_tool.is_single_input
assert test_tool("foo") == "foo"
assert test_tool.name == "test_name"
assert test_tool.description == "test_description"
@pytest.mark.asyncio
async def test_create_async_tool() -> None:
"""Test that async tools are allowed."""
async def _test_func(x: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | return x
test_tool = Tool(
name="test_name",
func=lambda x: x,
description="test_description",
coroutine=_test_func,
)
assert test_tool.is_single_input
assert test_tool("foo") == "foo"
assert test_tool.name == "test_name"
assert test_tool.description == "test_description"
assert test_tool.coroutine is not None
assert await test_tool.arun("foo") == "foo"
@pytest.mark.parametrize(
"agent_cls",
[
ZeroShotAgent,
ChatAgent,
ConversationalChatAgent,
ConversationalAgent,
ReActDocstoreAgent,
ReActTextWorldAgent,
SelfAskWithSearchAgent,
],
)
def test_single_input_agent_raises_error_on_structured_tool( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,053 | Tools with partials (Partial functions not yet supported in tools) | We commonly used this pattern to create tools:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=partial(foo, "bar"),
name = "foo",
description="foobar"
)
```
which as of 0.0.148 (I think) gives a pydantic error "Partial functions not yet supported in tools." We must use instead this format:
```py
from langchain.tools import Tool
from functools import partial
def foo(x, y):
return y
Tool.from_function(
func=lambda y: foo(x="bar",y=y),
name = "foo",
description="foobar"
)
```
It would be nice to again support partials. | https://github.com/langchain-ai/langchain/issues/4053 | https://github.com/langchain-ai/langchain/pull/4058 | 7e967aa4d581bec8b29e9ea44267505b0bad18b9 | afa9d1292b0a152e36d338dde7b02f0b93bd37d9 | "2023-05-03T17:28:46Z" | python | "2023-05-03T20:16:41Z" | tests/unit_tests/agents/test_tools.py | agent_cls: Type[Agent],
) -> None:
"""Test that older agents raise errors on older tools."""
@tool
def the_tool(foo: str, bar: str) -> str:
"""Return the concat of foo and bar."""
return foo + bar
with pytest.raises(
ValueError,
match=f"{agent_cls.__name__} does not support"
f" multi-input tool {the_tool.name}.",
):
agent_cls.from_llm_and_tools(MagicMock(), [the_tool])
def test_tool_no_args_specified_assumes_str() -> None:
"""Older tools could assume *args and **kwargs were passed in."""
def ambiguous_function(*args: Any, **kwargs: Any) -> str:
"""An ambiguously defined function."""
return args[0]
some_tool = Tool(
name="chain_run",
description="Run the chain",
func=ambiguous_function,
)
expected_args = {"tool_input": {"type": "string"}}
assert some_tool.args == expected_args
assert some_tool.run("foobar") == "foobar"
assert some_tool.run({"tool_input": "foobar"}) == "foobar"
with pytest.raises(ValueError, match="Too many arguments to single-input tool"):
some_tool.run({"tool_input": "foobar", "other_input": "bar"}) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,142 | ImportError: cannot import name 'CursorResult' from 'sqlalchemy' | ### System Info
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
aiohttp 3.8.3 py310h5eee18b_0
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge
attrs 23.1.0 pyh71513ae_0 conda-forge
blas 1.0 mkl
brotlipy 0.7.0 py310h5764c6d_1004 conda-forge
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.01.10 h06a4308_0
certifi 2022.12.7 py310h06a4308_0
cffi 1.15.0 py310h0fdd8cc_0 conda-forge
charset-normalizer 2.0.4 pyhd3eb1b0_0
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
cryptography 3.4.8 py310h685ca39_1 conda-forge
dataclasses-json 0.5.7 pyhd8ed1ab_0 conda-forge
frozenlist 1.3.3 py310h5eee18b_0
greenlet 2.0.1 py310h6a678d5_0
idna 3.4 pyhd8ed1ab_0 conda-forge
intel-openmp 2021.4.0 h06a4308_3561
langchain 0.0.158 pyhd8ed1ab_0 conda-forge
ld_impl_linux-64 2.38 h1181459_1
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libstdcxx-ng 11.2.0 h1234567_1
libuuid 1.41.5 h5eee18b_0
marshmallow 3.19.0 pyhd8ed1ab_0 conda-forge
marshmallow-enum 1.5.1 pyh9f0ad1d_3 conda-forge
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py310ha2c4b55_0 conda-forge
mkl_fft 1.3.1 py310hd6ae3a3_0
mkl_random 1.2.2 py310h00e6091_0
multidict 6.0.2 py310h5eee18b_0
mypy_extensions 1.0.0 pyha770c72_0 conda-forge
ncurses 6.4 h6a678d5_0
numexpr 2.8.4 py310h8879344_0
numpy 1.24.3 py310hd5efca6_0
numpy-base 1.24.3 py310h8e6c178_0
openapi-schema-pydantic 1.2.4 pyhd8ed1ab_0 conda-forge
openssl 1.1.1t h7f8727e_0
packaging 23.1 pyhd8ed1ab_0 conda-forge
pip 22.2.2 pypi_0 pypi
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pydantic 1.10.2 py310h5eee18b_0
pyopenssl 20.0.1 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.10.9 h7a1cb2a_2
python_abi 3.10 2_cp310 conda-forge
pyyaml 6.0 py310h5764c6d_4 conda-forge
readline 8.2 h5eee18b_0
requests 2.29.0 pyhd8ed1ab_0 conda-forge
setuptools 66.0.0 py310h06a4308_0
six 1.16.0 pyh6c4a22f_0 conda-forge
sqlalchemy 1.4.39 py310h5eee18b_0
sqlite 3.41.2 h5eee18b_0
stringcase 1.2.0 py_0 conda-forge
tenacity 8.2.2 pyhd8ed1ab_0 conda-forge
tk 8.6.12 h1ccaba5_0
tqdm 4.65.0 pyhd8ed1ab_1 conda-forge
typing-extensions 4.5.0 hd8ed1ab_0 conda-forge
typing_extensions 4.5.0 pyha770c72_0 conda-forge
typing_inspect 0.8.0 pyhd8ed1ab_0 conda-forge
tzdata 2023c h04d1e81_0
urllib3 1.26.15 pyhd8ed1ab_0 conda-forge
wheel 0.38.4 py310h06a4308_0
xz 5.4.2 h5eee18b_0
yaml 0.2.5 h7f98852_2 conda-forge
yarl 1.7.2 py310h5764c6d_2 conda-forge
zlib 1.2.13 h5eee18b_0
Traceback (most recent call last):
File "/home/bachar/projects/op-stack/./app.py", line 1, in <module>
from langchain.document_loaders import DirectoryLoader
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.tools import InvalidTool
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/tools/__init__.py", line 32, in <module>
from langchain.tools.vectorstore.tool import (
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/tools/vectorstore/tool.py", line 13, in <module>
from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/chains/__init__.py", line 19, in <module>
from langchain.chains.loading import load_chain
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/chains/loading.py", line 24, in <module>
from langchain.chains.sql_database.base import SQLDatabaseChain
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/chains/sql_database/base.py", line 15, in <module>
from langchain.sql_database import SQLDatabase
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/sql_database.py", line 8, in <module>
from sqlalchemy import (
ImportError: cannot import name 'CursorResult' from 'sqlalchemy' (/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/sqlalchemy/__init__.py)
(/home/bachar/projects/op-stack/venv)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import DirectoryLoader
docs = DirectoryLoader("./pdfs", "**/*.pdf").load()
### Expected behavior
no errors should be thrown | https://github.com/langchain-ai/langchain/issues/4142 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-05T00:47:24Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | """SQLAlchemy wrapper around a database."""
from __future__ import annotations
import warnings
from typing import Any, Iterable, List, Optional
import sqlalchemy
from sqlalchemy import (
CursorResult,
MetaData,
Table,
create_engine,
inspect,
select,
text,
)
from sqlalchemy.engine import Engine
from sqlalchemy.exc import ProgrammingError, SQLAlchemyError
from sqlalchemy.schema import CreateTable
def _format_index(index: sqlalchemy.engine.interfaces.ReflectedIndex) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,142 | ImportError: cannot import name 'CursorResult' from 'sqlalchemy' | ### System Info
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
aiohttp 3.8.3 py310h5eee18b_0
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge
attrs 23.1.0 pyh71513ae_0 conda-forge
blas 1.0 mkl
brotlipy 0.7.0 py310h5764c6d_1004 conda-forge
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.01.10 h06a4308_0
certifi 2022.12.7 py310h06a4308_0
cffi 1.15.0 py310h0fdd8cc_0 conda-forge
charset-normalizer 2.0.4 pyhd3eb1b0_0
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
cryptography 3.4.8 py310h685ca39_1 conda-forge
dataclasses-json 0.5.7 pyhd8ed1ab_0 conda-forge
frozenlist 1.3.3 py310h5eee18b_0
greenlet 2.0.1 py310h6a678d5_0
idna 3.4 pyhd8ed1ab_0 conda-forge
intel-openmp 2021.4.0 h06a4308_3561
langchain 0.0.158 pyhd8ed1ab_0 conda-forge
ld_impl_linux-64 2.38 h1181459_1
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libstdcxx-ng 11.2.0 h1234567_1
libuuid 1.41.5 h5eee18b_0
marshmallow 3.19.0 pyhd8ed1ab_0 conda-forge
marshmallow-enum 1.5.1 pyh9f0ad1d_3 conda-forge
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py310ha2c4b55_0 conda-forge
mkl_fft 1.3.1 py310hd6ae3a3_0
mkl_random 1.2.2 py310h00e6091_0
multidict 6.0.2 py310h5eee18b_0
mypy_extensions 1.0.0 pyha770c72_0 conda-forge
ncurses 6.4 h6a678d5_0
numexpr 2.8.4 py310h8879344_0
numpy 1.24.3 py310hd5efca6_0
numpy-base 1.24.3 py310h8e6c178_0
openapi-schema-pydantic 1.2.4 pyhd8ed1ab_0 conda-forge
openssl 1.1.1t h7f8727e_0
packaging 23.1 pyhd8ed1ab_0 conda-forge
pip 22.2.2 pypi_0 pypi
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pydantic 1.10.2 py310h5eee18b_0
pyopenssl 20.0.1 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.10.9 h7a1cb2a_2
python_abi 3.10 2_cp310 conda-forge
pyyaml 6.0 py310h5764c6d_4 conda-forge
readline 8.2 h5eee18b_0
requests 2.29.0 pyhd8ed1ab_0 conda-forge
setuptools 66.0.0 py310h06a4308_0
six 1.16.0 pyh6c4a22f_0 conda-forge
sqlalchemy 1.4.39 py310h5eee18b_0
sqlite 3.41.2 h5eee18b_0
stringcase 1.2.0 py_0 conda-forge
tenacity 8.2.2 pyhd8ed1ab_0 conda-forge
tk 8.6.12 h1ccaba5_0
tqdm 4.65.0 pyhd8ed1ab_1 conda-forge
typing-extensions 4.5.0 hd8ed1ab_0 conda-forge
typing_extensions 4.5.0 pyha770c72_0 conda-forge
typing_inspect 0.8.0 pyhd8ed1ab_0 conda-forge
tzdata 2023c h04d1e81_0
urllib3 1.26.15 pyhd8ed1ab_0 conda-forge
wheel 0.38.4 py310h06a4308_0
xz 5.4.2 h5eee18b_0
yaml 0.2.5 h7f98852_2 conda-forge
yarl 1.7.2 py310h5764c6d_2 conda-forge
zlib 1.2.13 h5eee18b_0
Traceback (most recent call last):
File "/home/bachar/projects/op-stack/./app.py", line 1, in <module>
from langchain.document_loaders import DirectoryLoader
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.tools import InvalidTool
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/tools/__init__.py", line 32, in <module>
from langchain.tools.vectorstore.tool import (
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/tools/vectorstore/tool.py", line 13, in <module>
from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/chains/__init__.py", line 19, in <module>
from langchain.chains.loading import load_chain
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/chains/loading.py", line 24, in <module>
from langchain.chains.sql_database.base import SQLDatabaseChain
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/chains/sql_database/base.py", line 15, in <module>
from langchain.sql_database import SQLDatabase
File "/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/langchain/sql_database.py", line 8, in <module>
from sqlalchemy import (
ImportError: cannot import name 'CursorResult' from 'sqlalchemy' (/home/bachar/projects/op-stack/venv/lib/python3.10/site-packages/sqlalchemy/__init__.py)
(/home/bachar/projects/op-stack/venv)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import DirectoryLoader
docs = DirectoryLoader("./pdfs", "**/*.pdf").load()
### Expected behavior
no errors should be thrown | https://github.com/langchain-ai/langchain/issues/4142 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-05T00:47:24Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | return (
f'Name: {index["name"]}, Unique: {index["unique"]},'
f' Columns: {str(index["column_names"])}'
)
class SQLDatabase:
"""SQLAlchemy wrapper around a database."""
def __init__(
self,
engine: Engine,
schema: Optional[str] = None,
metadata: Optional[MetaData] = None,
ignore_tables: Optional[List[str]] = None,
include_tables: Optional[List[str]] = None,
sample_rows_in_table_info: int = 3,
indexes_in_table_info: bool = False,
custom_table_info: Optional[dict] = None,
view_support: bool = False,
):
"""Create engine from database URI."""
self._engine = engine
self._schema = schema |
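The `SQLDatabase.__init__` chunk above takes an already-constructed `Engine` plus optional schema, table filters, and table-info options. Below is a minimal, hedged construction sketch under that signature; the SQLite path is illustrative, and it assumes a SQLAlchemy version on which `langchain.sql_database` imports cleanly.

```py
from sqlalchemy import create_engine

from langchain.sql_database import SQLDatabase

# Illustrative local database path.
engine = create_engine("sqlite:///example.db")
db = SQLDatabase(
    engine,
    include_tables=None,          # use all tables
    sample_rows_in_table_info=2,  # fewer sample rows in table descriptions
)
```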
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,142 | ImportError: cannot import name 'CursorResult' from 'sqlalchemy' | (same issue body as the first copy above) | https://github.com/langchain-ai/langchain/issues/4142 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-05T00:47:24Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | if include_tables and ignore_tables:
raise ValueError("Cannot specify both include_tables and ignore_tables")
self._inspector = inspect(self._engine)
self._all_tables = set(
self._inspector.get_table_names(schema=schema)
+ (self._inspector.get_view_names(schema=schema) if view_support else [])
)
self._include_tables = set(include_tables) if include_tables else set()
if self._include_tables:
missing_tables = self._include_tables - self._all_tables
if missing_tables:
raise ValueError(
f"include_tables {missing_tables} not found in database"
)
self._ignore_tables = set(ignore_tables) if ignore_tables else set()
if self._ignore_tables:
missing_tables = self._ignore_tables - self._all_tables
if missing_tables:
raise ValueError(
f"ignore_tables {missing_tables} not found in database"
)
usable_tables = self.get_usable_table_names()
self._usable_tables = set(usable_tables) if usable_tables else self._all_tables
if not isinstance(sample_rows_in_table_info, int):
raise TypeError("sample_rows_in_table_info must be an integer")
self._sample_rows_in_table_info = sample_rows_in_table_info
self._indexes_in_table_info = indexes_in_table_info
self._custom_table_info = custom_table_info |
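The validation above rejects conflicting or unknown table filters. A small sketch of the failure mode, assuming the same illustrative SQLite file:

```python
from langchain.sql_database import SQLDatabase

try:
    SQLDatabase.from_uri(
        "sqlite:///example.db",
        include_tables=["users"],
        ignore_tables=["orders"],    # combining both filters is rejected
    )
except ValueError as err:
    print(err)  # Cannot specify both include_tables and ignore_tables
```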
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,142 | ImportError: cannot import name 'CursorResult' from 'sqlalchemy' | (same issue body as the first copy above) | https://github.com/langchain-ai/langchain/issues/4142 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-05T00:47:24Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | if self._custom_table_info:
if not isinstance(self._custom_table_info, dict):
raise TypeError(
"table_info must be a dictionary with table names as keys and the "
"desired table info as values"
)
intersection = set(self._custom_table_info).intersection(self._all_tables)
self._custom_table_info = dict(
(table, self._custom_table_info[table])
for table in self._custom_table_info
if table in intersection
)
self._metadata = metadata or MetaData()
self._metadata.reflect(
views=view_support,
bind=self._engine,
only=list(self._usable_tables),
schema=self._schema,
)
@classmethod
def from_uri(
cls, database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any
) -> SQLDatabase:
"""Construct a SQLAlchemy engine from URI."""
_engine_args = engine_args or {}
return cls(create_engine(database_uri, **_engine_args), **kwargs)
@property
def dialect(self) -> str: |
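`from_uri` simply forwards `engine_args` to `sqlalchemy.create_engine`. A hedged sketch with an assumed Postgres DSN and pool settings:

```python
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri(
    "postgresql+psycopg2://user:pass@localhost:5432/appdb",   # illustrative DSN
    engine_args={"pool_pre_ping": True, "pool_size": 5},      # passed to create_engine
)
print(db.dialect)  # "postgresql"
```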
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,142 | ImportError: cannot import name 'CursorResult' from 'sqlalchemy' | (same issue body as the first copy above) | https://github.com/langchain-ai/langchain/issues/4142 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-05T00:47:24Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | """Return string representation of dialect to use."""
return self._engine.dialect.name
def get_usable_table_names(self) -> Iterable[str]:
"""Get names of tables available."""
if self._include_tables:
return self._include_tables
return self._all_tables - self._ignore_tables
def get_table_names(self) -> Iterable[str]:
"""Get names of tables available."""
warnings.warn(
"This method is deprecated - please use `get_usable_table_names`."
)
return self.get_usable_table_names()
@property
def table_info(self) -> str:
"""Information about all tables in the database."""
return self.get_table_info()
def get_table_info(self, table_names: Optional[List[str]] = None) -> str:
"""Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If `sample_rows_in_table_info`, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
"""
all_table_names = self.get_usable_table_names()
if table_names is not None:
missing_tables = set(table_names).difference(all_table_names)
if missing_tables:
raise ValueError(f"table_names {missing_tables} not found in database") |
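`get_table_info` is the text the SQL chains interpolate into the prompt; unknown table names raise the `ValueError` shown above. A short sketch, again assuming a `users` table in a local SQLite file:

```python
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")
schema_text = db.get_table_info(table_names=["users"])
print(schema_text)   # CREATE TABLE users (...) plus optional index/sample-row blocks
```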
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,142 | ImportError: cannot import name 'CursorResult' from 'sqlalchemy' | (same issue body as the first copy above) | https://github.com/langchain-ai/langchain/issues/4142 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-05T00:47:24Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | all_table_names = table_names
meta_tables = [
tbl
for tbl in self._metadata.sorted_tables
if tbl.name in set(all_table_names)
and not (self.dialect == "sqlite" and tbl.name.startswith("sqlite_"))
]
tables = []
for table in meta_tables:
if self._custom_table_info and table.name in self._custom_table_info:
tables.append(self._custom_table_info[table.name])
continue
create_table = str(CreateTable(table).compile(self._engine))
table_info = f"{create_table.rstrip()}"
has_extra_info = (
self._indexes_in_table_info or self._sample_rows_in_table_info
)
if has_extra_info:
table_info += "\n\n/*"
if self._indexes_in_table_info:
table_info += f"\n{self._get_table_indexes(table)}\n"
if self._sample_rows_in_table_info:
table_info += f"\n{self._get_sample_rows(table)}\n"
if has_extra_info:
table_info += "*/"
tables.append(table_info)
final_str = "\n\n".join(tables)
return final_str
def _get_table_indexes(self, table: Table) -> str: |
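The loop above prefers `custom_table_info` over the generated `CREATE TABLE` text and appends indexes and sample rows inside a `/* ... */` block. A sketch of overriding one table's description (URI and table name are assumptions):

```python
from langchain.sql_database import SQLDatabase

custom = {"users": "Table `users`: id INTEGER primary key, email TEXT."}
db = SQLDatabase.from_uri(
    "sqlite:///example.db",
    custom_table_info=custom,      # used verbatim instead of the reflected DDL
    indexes_in_table_info=True,    # other tables get a "Table Indexes:" block
)
print(db.table_info)
```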
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,142 | ImportError: cannot import name 'CursorResult' from 'sqlalchemy' | (same issue body as the first copy above) | https://github.com/langchain-ai/langchain/issues/4142 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-05T00:47:24Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | indexes = self._inspector.get_indexes(table.name)
indexes_formatted = "\n".join(map(_format_index, indexes))
return f"Table Indexes:\n{indexes_formatted}"
def _get_sample_rows(self, table: Table) -> str:
command = select(table).limit(self._sample_rows_in_table_info)
columns_str = "\t".join([col.name for col in table.columns])
try:
with self._engine.connect() as connection:
sample_rows_result: CursorResult = connection.execute(command)
sample_rows = list(
map(lambda ls: [str(i)[:100] for i in ls], sample_rows_result)
)
sample_rows_str = "\n".join(["\t".join(row) for row in sample_rows])
except ProgrammingError:
sample_rows_str = ""
return (
f"{self._sample_rows_in_table_info} rows from {table.name} table:\n"
f"{columns_str}\n"
f"{sample_rows_str}"
)
def run(self, command: str, fetch: str = "all") -> str: |
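`_get_sample_rows` issues a `SELECT ... LIMIT n` per table, and its `CursorResult` annotation is the only use in this module of the name behind issues #4142 and #4129. Disabling sample rows keeps prompt building metadata-only; a sketch with an assumed URI:

```python
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db", sample_rows_in_table_info=0)
print(db.get_table_info())   # no "N rows from <table> table:" block is appended
```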
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,142 | ImportError: cannot import name 'CursorResult' from 'sqlalchemy' | (same issue body as the first copy above) | https://github.com/langchain-ai/langchain/issues/4142 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-05T00:47:24Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | """Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
"""
with self._engine.begin() as connection:
if self._schema is not None:
connection.exec_driver_sql(f"SET search_path TO {self._schema}")
cursor = connection.execute(text(command))
if cursor.returns_rows:
if fetch == "all":
result = cursor.fetchall()
elif fetch == "one":
result = cursor.fetchone()[0]
else:
raise ValueError("Fetch parameter must be either 'one' or 'all'")
return str(result)
return ""
def get_table_info_no_throw(self, table_names: Optional[List[str]] = None) -> str: |
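The `run` implementation above stringifies whatever the cursor returns, with `fetch` selecting between `fetchall` and `fetchone`. A sketch, assuming `example.db` already holds a populated `users` table:

```python
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")
print(db.run("SELECT COUNT(*) FROM users", fetch="one"))       # e.g. "42"
print(db.run("SELECT email FROM users LIMIT 2", fetch="all"))  # e.g. "[('a@example.com',), ...]"
```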
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,142 | ImportError: cannot import name 'CursorResult' from 'sqlalchemy' | (same issue body as the first copy above) | https://github.com/langchain-ai/langchain/issues/4142 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-05T00:47:24Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | """Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If `sample_rows_in_table_info`, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
"""
try:
return self.get_table_info(table_names)
except ValueError as e:
"""Format the error message"""
return f"Error: {e}"
def run_no_throw(self, command: str, fetch: str = "all") -> str:
"""Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
"""
try:
return self.run(command, fetch)
except SQLAlchemyError as e:
"""Format the error message"""
return f"Error: {e}" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,129 | Bug introduced in 0.0.158 | Updates in version 0.0.158 have introduced a bug that prevents this import from being successful, while it works in 0.0.157
```
Traceback (most recent call last):
File "path", line 5, in <module>
from langchain.chains import OpenAIModerationChain, SequentialChain, ConversationChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.tools import InvalidTool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/__init__.py", line 32, in <module>
from langchain.tools.vectorstore.tool import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/vectorstore/tool.py", line 13, in <module>
from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/__init__.py", line 19, in <module>
from langchain.chains.loading import load_chain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/loading.py", line 24, in <module>
from langchain.chains.sql_database.base import SQLDatabaseChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/sql_database/base.py", line 15, in <module>
from langchain.sql_database import SQLDatabase
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/sql_database.py", line 8, in <module>
from sqlalchemy import (
ImportError: cannot import name 'CursorResult' from 'sqlalchemy' (/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/sqlalchemy/__init__.py)
``` | https://github.com/langchain-ai/langchain/issues/4129 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-04T19:24:15Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | """SQLAlchemy wrapper around a database."""
from __future__ import annotations
import warnings
from typing import Any, Iterable, List, Optional
import sqlalchemy
from sqlalchemy import (
CursorResult,
MetaData,
Table,
create_engine,
inspect,
select,
text,
)
from sqlalchemy.engine import Engine
from sqlalchemy.exc import ProgrammingError, SQLAlchemyError
from sqlalchemy.schema import CreateTable
def _format_index(index: sqlalchemy.engine.interfaces.ReflectedIndex) -> str: |
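The module-level `from sqlalchemy import (... CursorResult ...)` shown above is exactly what fails on SQLAlchemy 1.4.x — the name is only re-exported at the top level in 2.x — and it produces the tracebacks quoted in issues #4142 and #4129. One possible compatibility shim is sketched below as an illustration; it is not necessarily what the linked PR #4145 ships:

```python
try:
    from sqlalchemy import CursorResult            # SQLAlchemy 2.x top-level export
except ImportError:
    from sqlalchemy.engine import CursorResult     # SQLAlchemy 1.4.x location
    # (SQLAlchemy 1.3 predates CursorResult entirely and would still fail here.)
```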
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,129 | Bug introduced in 0.0.158 | (same issue body as the first copy above) | https://github.com/langchain-ai/langchain/issues/4129 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-04T19:24:15Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | return (
f'Name: {index["name"]}, Unique: {index["unique"]},'
f' Columns: {str(index["column_names"])}'
)
class SQLDatabase:
"""SQLAlchemy wrapper around a database."""
def __init__(
self,
engine: Engine,
schema: Optional[str] = None,
metadata: Optional[MetaData] = None,
ignore_tables: Optional[List[str]] = None,
include_tables: Optional[List[str]] = None,
sample_rows_in_table_info: int = 3,
indexes_in_table_info: bool = False,
custom_table_info: Optional[dict] = None,
view_support: bool = False,
):
"""Create engine from database URI."""
self._engine = engine
self._schema = schema |
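This chunk repeats the constructor head; two arguments not exercised in the earlier sketches are `schema` and `view_support`. A sketch with assumed values:

```python
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri(
    "postgresql+psycopg2://user:pass@localhost/appdb",  # illustrative DSN
    schema="analytics",     # run() then issues SET search_path TO analytics
    view_support=True,      # reflect views as well as tables
)
```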
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,129 | Bug introduced in 0.0.158 | (same issue body as the first copy above) | https://github.com/langchain-ai/langchain/issues/4129 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-04T19:24:15Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | if include_tables and ignore_tables:
raise ValueError("Cannot specify both include_tables and ignore_tables")
self._inspector = inspect(self._engine)
self._all_tables = set(
self._inspector.get_table_names(schema=schema)
+ (self._inspector.get_view_names(schema=schema) if view_support else [])
)
self._include_tables = set(include_tables) if include_tables else set()
if self._include_tables:
missing_tables = self._include_tables - self._all_tables
if missing_tables:
raise ValueError(
f"include_tables {missing_tables} not found in database"
)
self._ignore_tables = set(ignore_tables) if ignore_tables else set()
if self._ignore_tables:
missing_tables = self._ignore_tables - self._all_tables
if missing_tables:
raise ValueError(
f"ignore_tables {missing_tables} not found in database"
)
usable_tables = self.get_usable_table_names()
self._usable_tables = set(usable_tables) if usable_tables else self._all_tables
if not isinstance(sample_rows_in_table_info, int):
raise TypeError("sample_rows_in_table_info must be an integer")
self._sample_rows_in_table_info = sample_rows_in_table_info
self._indexes_in_table_info = indexes_in_table_info
self._custom_table_info = custom_table_info |
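As the validation above shows, `ignore_tables` entries must actually exist in the database. A sketch of hiding a bookkeeping table from the LLM (table name assumed):

```python
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri(
    "sqlite:///example.db",
    ignore_tables=["alembic_version"],   # must exist, otherwise ValueError is raised
)
print(db.get_usable_table_names())        # every table except alembic_version
```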
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,129 | Bug introduced in 0.0.158 | (same issue body as the first copy above) | https://github.com/langchain-ai/langchain/issues/4129 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-04T19:24:15Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | if self._custom_table_info:
if not isinstance(self._custom_table_info, dict):
raise TypeError(
"table_info must be a dictionary with table names as keys and the "
"desired table info as values"
)
intersection = set(self._custom_table_info).intersection(self._all_tables)
self._custom_table_info = dict(
(table, self._custom_table_info[table])
for table in self._custom_table_info
if table in intersection
)
self._metadata = metadata or MetaData()
self._metadata.reflect(
views=view_support,
bind=self._engine,
only=list(self._usable_tables),
schema=self._schema,
)
@classmethod
def from_uri(
cls, database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any
) -> SQLDatabase:
"""Construct a SQLAlchemy engine from URI."""
_engine_args = engine_args or {}
return cls(create_engine(database_uri, **_engine_args), **kwargs)
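    # Usage sketch (illustrative URI and engine options, not part of the original module):
    #     db = SQLDatabase.from_uri("sqlite:///example.db", engine_args={"echo": True})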
@property
def dialect(self) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,129 | Bug introduced in 0.0.158 | Updates in version 0.0.158 have introduced a bug that prevents this import from being successful, while it works in 0.0.157
```
Traceback (most recent call last):
File "path", line 5, in <module>
from langchain.chains import OpenAIModerationChain, SequentialChain, ConversationChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.tools import InvalidTool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/__init__.py", line 32, in <module>
from langchain.tools.vectorstore.tool import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/vectorstore/tool.py", line 13, in <module>
from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/__init__.py", line 19, in <module>
from langchain.chains.loading import load_chain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/loading.py", line 24, in <module>
from langchain.chains.sql_database.base import SQLDatabaseChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/sql_database/base.py", line 15, in <module>
from langchain.sql_database import SQLDatabase
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/sql_database.py", line 8, in <module>
from sqlalchemy import (
ImportError: cannot import name 'CursorResult' from 'sqlalchemy' (/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/sqlalchemy/__init__.py)
``` | https://github.com/langchain-ai/langchain/issues/4129 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-04T19:24:15Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | """Return string representation of dialect to use."""
return self._engine.dialect.name
def get_usable_table_names(self) -> Iterable[str]:
"""Get names of tables available."""
if self._include_tables:
return self._include_tables
return self._all_tables - self._ignore_tables
def get_table_names(self) -> Iterable[str]:
"""Get names of tables available."""
warnings.warn(
"This method is deprecated - please use `get_usable_table_names`."
)
return self.get_usable_table_names()
@property
def table_info(self) -> str:
"""Information about all tables in the database."""
return self.get_table_info()
def get_table_info(self, table_names: Optional[List[str]] = None) -> str:
"""Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If `sample_rows_in_table_info`, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
"""
all_table_names = self.get_usable_table_names()
if table_names is not None:
missing_tables = set(table_names).difference(all_table_names)
if missing_tables:
raise ValueError(f"table_names {missing_tables} not found in database") |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,129 | Bug introduced in 0.0.158 | Updates in version 0.0.158 have introduced a bug that prevents this import from being successful, while it works in 0.0.157
```
Traceback (most recent call last):
File "path", line 5, in <module>
from langchain.chains import OpenAIModerationChain, SequentialChain, ConversationChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.tools import InvalidTool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/__init__.py", line 32, in <module>
from langchain.tools.vectorstore.tool import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/vectorstore/tool.py", line 13, in <module>
from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/__init__.py", line 19, in <module>
from langchain.chains.loading import load_chain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/loading.py", line 24, in <module>
from langchain.chains.sql_database.base import SQLDatabaseChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/sql_database/base.py", line 15, in <module>
from langchain.sql_database import SQLDatabase
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/sql_database.py", line 8, in <module>
from sqlalchemy import (
ImportError: cannot import name 'CursorResult' from 'sqlalchemy' (/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/sqlalchemy/__init__.py)
``` | https://github.com/langchain-ai/langchain/issues/4129 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-04T19:24:15Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | all_table_names = table_names
meta_tables = [
tbl
for tbl in self._metadata.sorted_tables
if tbl.name in set(all_table_names)
and not (self.dialect == "sqlite" and tbl.name.startswith("sqlite_"))
]
tables = []
for table in meta_tables:
if self._custom_table_info and table.name in self._custom_table_info:
tables.append(self._custom_table_info[table.name])
continue
create_table = str(CreateTable(table).compile(self._engine))
table_info = f"{create_table.rstrip()}"
has_extra_info = (
self._indexes_in_table_info or self._sample_rows_in_table_info
)
if has_extra_info:
table_info += "\n\n/*"
if self._indexes_in_table_info:
table_info += f"\n{self._get_table_indexes(table)}\n"
if self._sample_rows_in_table_info:
table_info += f"\n{self._get_sample_rows(table)}\n"
if has_extra_info:
table_info += "*/"
tables.append(table_info)
final_str = "\n\n".join(tables)
return final_str
def _get_table_indexes(self, table: Table) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,129 | Bug introduced in 0.0.158 | Updates in version 0.0.158 have introduced a bug that prevents this import from being successful, while it works in 0.0.157
```
Traceback (most recent call last):
File "path", line 5, in <module>
from langchain.chains import OpenAIModerationChain, SequentialChain, ConversationChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.tools import InvalidTool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/__init__.py", line 32, in <module>
from langchain.tools.vectorstore.tool import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/vectorstore/tool.py", line 13, in <module>
from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/__init__.py", line 19, in <module>
from langchain.chains.loading import load_chain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/loading.py", line 24, in <module>
from langchain.chains.sql_database.base import SQLDatabaseChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/sql_database/base.py", line 15, in <module>
from langchain.sql_database import SQLDatabase
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/sql_database.py", line 8, in <module>
from sqlalchemy import (
ImportError: cannot import name 'CursorResult' from 'sqlalchemy' (/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/sqlalchemy/__init__.py)
``` | https://github.com/langchain-ai/langchain/issues/4129 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-04T19:24:15Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | indexes = self._inspector.get_indexes(table.name)
indexes_formatted = "\n".join(map(_format_index, indexes))
return f"Table Indexes:\n{indexes_formatted}"
def _get_sample_rows(self, table: Table) -> str:
command = select(table).limit(self._sample_rows_in_table_info)
columns_str = "\t".join([col.name for col in table.columns])
try:
with self._engine.connect() as connection:
sample_rows_result: CursorResult = connection.execute(command)
sample_rows = list(
map(lambda ls: [str(i)[:100] for i in ls], sample_rows_result)
)
sample_rows_str = "\n".join(["\t".join(row) for row in sample_rows])
except ProgrammingError:
sample_rows_str = ""
return (
f"{self._sample_rows_in_table_info} rows from {table.name} table:\n"
f"{columns_str}\n"
f"{sample_rows_str}"
)
def run(self, command: str, fetch: str = "all") -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,129 | Bug introduced in 0.0.158 | Updates in version 0.0.158 have introduced a bug that prevents this import from being successful, while it works in 0.0.157
```
Traceback (most recent call last):
File "path", line 5, in <module>
from langchain.chains import OpenAIModerationChain, SequentialChain, ConversationChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.tools import InvalidTool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/__init__.py", line 32, in <module>
from langchain.tools.vectorstore.tool import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/vectorstore/tool.py", line 13, in <module>
from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/__init__.py", line 19, in <module>
from langchain.chains.loading import load_chain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/loading.py", line 24, in <module>
from langchain.chains.sql_database.base import SQLDatabaseChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/sql_database/base.py", line 15, in <module>
from langchain.sql_database import SQLDatabase
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/sql_database.py", line 8, in <module>
from sqlalchemy import (
ImportError: cannot import name 'CursorResult' from 'sqlalchemy' (/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/sqlalchemy/__init__.py)
``` | https://github.com/langchain-ai/langchain/issues/4129 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-04T19:24:15Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | """Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
"""
with self._engine.begin() as connection:
if self._schema is not None:
connection.exec_driver_sql(f"SET search_path TO {self._schema}")
cursor = connection.execute(text(command))
if cursor.returns_rows:
if fetch == "all":
result = cursor.fetchall()
elif fetch == "one":
result = cursor.fetchone()[0]
else:
raise ValueError("Fetch parameter must be either 'one' or 'all'")
return str(result)
return ""
def get_table_info_no_throw(self, table_names: Optional[List[str]] = None) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,129 | Bug introduced in 0.0.158 | Updates in version 0.0.158 have introduced a bug that prevents this import from being successful, while it works in 0.0.157
```
Traceback (most recent call last):
File "path", line 5, in <module>
from langchain.chains import OpenAIModerationChain, SequentialChain, ConversationChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.tools import InvalidTool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/__init__.py", line 32, in <module>
from langchain.tools.vectorstore.tool import (
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/tools/vectorstore/tool.py", line 13, in <module>
from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/__init__.py", line 19, in <module>
from langchain.chains.loading import load_chain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/loading.py", line 24, in <module>
from langchain.chains.sql_database.base import SQLDatabaseChain
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/chains/sql_database/base.py", line 15, in <module>
from langchain.sql_database import SQLDatabase
File "/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/langchain/sql_database.py", line 8, in <module>
from sqlalchemy import (
ImportError: cannot import name 'CursorResult' from 'sqlalchemy' (/Users/chasemcdo/.pyenv/versions/3.11.1/lib/python3.11/site-packages/sqlalchemy/__init__.py)
``` | https://github.com/langchain-ai/langchain/issues/4129 | https://github.com/langchain-ai/langchain/pull/4145 | 2f087d63af45a172fc363b3370e49141bd663ed2 | fea639c1fc1ac324f1300016a02b6d30a2f8d249 | "2023-05-04T19:24:15Z" | python | "2023-05-05T03:46:38Z" | langchain/sql_database.py | """Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If `sample_rows_in_table_info`, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
"""
try:
return self.get_table_info(table_names)
except ValueError as e:
"""Format the error message"""
return f"Error: {e}"
def run_no_throw(self, command: str, fetch: str = "all") -> str:
"""Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
"""
try:
return self.run(command, fetch)
except SQLAlchemyError as e:
"""Format the error message"""
return f"Error: {e}" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,120 | [Feature Request] Allow users to pass additional arguments to the WebDriver | Description:
Currently, when creating a Chrome or Firefox web driver using the `selenium.webdriver` module, users can only pass a limited set of arguments such as `headless` mode and hardcoded `no-sandbox`. However, there are many additional options available for these browsers that cannot be passed in using the existing API. I personally was limited by this when I had to add the `--disable-dev-shm-usage` and `--disable-gpu` arguments to the Chrome WebDriver.
To address this limitation, I propose adding a new `arguments` parameter to the `SeleniumURLLoader` that allows users to pass additional arguments as a list of strings.
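A minimal sketch of how the proposed parameter could look from the caller's side (the `arguments` keyword is part of this proposal, not the current API, and the flags are just the ones from my use case):

```python
from langchain.document_loaders import SeleniumURLLoader

loader = SeleniumURLLoader(
    urls=["https://example.com"],
    headless=True,
    arguments=["--disable-dev-shm-usage", "--disable-gpu"],  # proposed parameter
)
docs = loader.load()
```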
| https://github.com/langchain-ai/langchain/issues/4120 | https://github.com/langchain-ai/langchain/pull/4121 | 2a3c5f83537817d06ea8fad2836bbcd1cb33a551 | 19e28d8784adef90553da071ed891fc3252b2c63 | "2023-05-04T18:15:03Z" | python | "2023-05-05T20:24:42Z" | langchain/document_loaders/url_selenium.py | """Loader that uses Selenium to load a page, then uses unstructured to load the html.
"""
import logging
from typing import TYPE_CHECKING, List, Literal, Optional, Union
if TYPE_CHECKING:
from selenium.webdriver import Chrome, Firefox
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
logger = logging.getLogger(__name__)
class SeleniumURLLoader(BaseLoader):
"""Loader that uses Selenium and to load a page and unstructured to load the html.
This is useful for loading pages that require javascript to render.
Attributes:
urls (List[str]): List of URLs to load.
continue_on_failure (bool): If True, continue loading other URLs on failure.
browser (str): The browser to use, either 'chrome' or 'firefox'.
executable_path (Optional[str]): The path to the browser executable.
headless (bool): If True, the browser will run in headless mode.
"""
def __init__( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,120 | [Feature Request] Allow users to pass additional arguments to the WebDriver | Description:
Currently, when creating a Chrome or Firefox web driver using the `selenium.webdriver` module, users can only pass a limited set of arguments such as `headless` mode and hardcoded `no-sandbox`. However, there are many additional options available for these browsers that cannot be passed in using the existing API. I personally was limited by this when I had to add the `--disable-dev-shm-usage` and `--disable-gpu` arguments to the Chrome WebDriver.
To address this limitation, I propose adding a new `arguments` parameter to the `SeleniumURLLoader` that allows users to pass additional arguments as a list of strings.
| https://github.com/langchain-ai/langchain/issues/4120 | https://github.com/langchain-ai/langchain/pull/4121 | 2a3c5f83537817d06ea8fad2836bbcd1cb33a551 | 19e28d8784adef90553da071ed891fc3252b2c63 | "2023-05-04T18:15:03Z" | python | "2023-05-05T20:24:42Z" | langchain/document_loaders/url_selenium.py | self,
urls: List[str],
continue_on_failure: bool = True,
browser: Literal["chrome", "firefox"] = "chrome",
executable_path: Optional[str] = None,
headless: bool = True,
):
"""Load a list of URLs using Selenium and unstructured."""
try:
import selenium
except ImportError:
raise ValueError(
"selenium package not found, please install it with "
"`pip install selenium`"
)
try:
import unstructured
except ImportError:
raise ValueError(
"unstructured package not found, please install it with "
"`pip install unstructured`"
)
self.urls = urls
self.continue_on_failure = continue_on_failure
self.browser = browser
self.executable_path = executable_path
self.headless = headless
def _get_driver(self) -> Union["Chrome", "Firefox"]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,120 | [Feature Request] Allow users to pass additional arguments to the WebDriver | Description:
Currently, when creating a Chrome or Firefox web driver using the `selenium.webdriver` module, users can only pass a limited set of arguments such as `headless` mode and hardcoded `no-sandbox`. However, there are many additional options available for these browsers that cannot be passed in using the existing API. I personally was limited by this when I had to add the `--disable-dev-shm-usage` and `--disable-gpu` arguments to the Chrome WebDriver.
To address this limitation, I propose adding a new `arguments` parameter to the `SeleniumURLLoader` that allows users to pass additional arguments as a list of strings.
| https://github.com/langchain-ai/langchain/issues/4120 | https://github.com/langchain-ai/langchain/pull/4121 | 2a3c5f83537817d06ea8fad2836bbcd1cb33a551 | 19e28d8784adef90553da071ed891fc3252b2c63 | "2023-05-04T18:15:03Z" | python | "2023-05-05T20:24:42Z" | langchain/document_loaders/url_selenium.py | """Create and return a WebDriver instance based on the specified browser.
Raises:
ValueError: If an invalid browser is specified.
Returns:
Union[Chrome, Firefox]: A WebDriver instance for the specified browser.
"""
if self.browser.lower() == "chrome":
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.options import Options as ChromeOptions
chrome_options = ChromeOptions()
if self.headless:
chrome_options.add_argument("--headless")
chrome_options.add_argument("--no-sandbox")
if self.executable_path is None:
return Chrome(options=chrome_options)
return Chrome(executable_path=self.executable_path, options=chrome_options)
elif self.browser.lower() == "firefox":
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options as FirefoxOptions
firefox_options = FirefoxOptions()
if self.headless:
firefox_options.add_argument("--headless")
if self.executable_path is None:
return Firefox(options=firefox_options)
return Firefox(
executable_path=self.executable_path, options=firefox_options
)
else:
raise ValueError("Invalid browser specified. Use 'chrome' or 'firefox'.")
def load(self) -> List[Document]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,120 | [Feature Request] Allow users to pass additional arguments to the WebDriver | Description:
Currently, when creating a Chrome or Firefox web driver using the `selenium.webdriver` module, users can only pass a limited set of arguments such as `headless` mode and hardcoded `no-sandbox`. However, there are many additional options available for these browsers that cannot be passed in using the existing API. I personally was limited by this when I had to add the `--disable-dev-shm-usage` and `--disable-gpu` arguments to the Chrome WebDriver.
To address this limitation, I propose adding a new `arguments` parameter to the `SeleniumURLLoader` that allows users to pass additional arguments as a list of strings.
| https://github.com/langchain-ai/langchain/issues/4120 | https://github.com/langchain-ai/langchain/pull/4121 | 2a3c5f83537817d06ea8fad2836bbcd1cb33a551 | 19e28d8784adef90553da071ed891fc3252b2c63 | "2023-05-04T18:15:03Z" | python | "2023-05-05T20:24:42Z" | langchain/document_loaders/url_selenium.py | """Load the specified URLs using Selenium and create Document instances.
Returns:
List[Document]: A list of Document instances with loaded content.
"""
from unstructured.partition.html import partition_html
docs: List[Document] = list()
driver = self._get_driver()
for url in self.urls:
try:
driver.get(url)
page_content = driver.page_source
elements = partition_html(text=page_content)
text = "\n\n".join([str(el) for el in elements])
metadata = {"source": url}
docs.append(Document(page_content=text, metadata=metadata))
except Exception as e:
if self.continue_on_failure:
logger.error(f"Error fetching or processing {url}, exception: {e}")
else:
raise e
driver.quit()
return docs |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,304 | [Feature Request] Allow users to pass binary location to Selenium WebDriver | ### Feature request
Problem:
Unable to set binary_location for the Webdriver via SeleniumURLLoader
Proposal:
The proposal is to add a new arguments parameter to the SeleniumURLLoader that allows users to pass binary_location
### Motivation
To deploy Selenium on Heroku ([tutorial](https://romik-kelesh.medium.com/how-to-deploy-a-python-web-scraper-with-selenium-on-heroku-1459cb3ac76c)), the browser binary must be installed as a buildpack and its location must be set as the binary_location for the driver browser options. Currently when creating a Chrome or Firefox web driver via SeleniumURLLoader, users cannot set the binary_location of the WebDriver.
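For reference, this is roughly what has to happen at the plain-Selenium level; the binary path below is the one the Heroku buildpack exposes in that tutorial and should be treated as an assumption:

```python
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.options import Options as ChromeOptions

options = ChromeOptions()
options.add_argument("--headless")
options.binary_location = "/app/.apt/usr/bin/google-chrome"  # assumed buildpack path
driver = Chrome(options=options)
```

SeleniumURLLoader builds its own options object internally, so there is currently no way to set this attribute through it.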
### Your contribution
I can submit the PR to add this capability to SeleniumURLLoader | https://github.com/langchain-ai/langchain/issues/4304 | https://github.com/langchain-ai/langchain/pull/4305 | 65c95f9fb2b86cf3281f2f3939b37e71f048f741 | 637c61cffbd279dc2431f9e224cfccec9c81f6cd | "2023-05-07T23:25:37Z" | python | "2023-05-08T15:05:55Z" | langchain/document_loaders/url_selenium.py | """Loader that uses Selenium to load a page, then uses unstructured to load the html.
"""
import logging
from typing import TYPE_CHECKING, List, Literal, Optional, Union
if TYPE_CHECKING:
from selenium.webdriver import Chrome, Firefox
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
logger = logging.getLogger(__name__)
class SeleniumURLLoader(BaseLoader):
"""Loader that uses Selenium and to load a page and unstructured to load the html.
This is useful for loading pages that require javascript to render.
Attributes:
urls (List[str]): List of URLs to load.
continue_on_failure (bool): If True, continue loading other URLs on failure.
browser (str): The browser to use, either 'chrome' or 'firefox'.
executable_path (Optional[str]): The path to the browser executable.
headless (bool): If True, the browser will run in headless mode.
arguments [List[str]]: List of arguments to pass to the browser.
"""
def __init__( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,304 | [Feature Request] Allow users to pass binary location to Selenium WebDriver | ### Feature request
Problem:
Unable to set binary_location for the Webdriver via SeleniumURLLoader
Proposal:
The proposal is to add a new arguments parameter to the SeleniumURLLoader that allows users to pass binary_location
### Motivation
To deploy Selenium on Heroku ([tutorial](https://romik-kelesh.medium.com/how-to-deploy-a-python-web-scraper-with-selenium-on-heroku-1459cb3ac76c)), the browser binary must be installed as a buildpack and its location must be set as the binary_location for the driver browser options. Currently when creating a Chrome or Firefox web driver via SeleniumURLLoader, users cannot set the binary_location of the WebDriver.
### Your contribution
I can submit the PR to add this capability to SeleniumURLLoader | https://github.com/langchain-ai/langchain/issues/4304 | https://github.com/langchain-ai/langchain/pull/4305 | 65c95f9fb2b86cf3281f2f3939b37e71f048f741 | 637c61cffbd279dc2431f9e224cfccec9c81f6cd | "2023-05-07T23:25:37Z" | python | "2023-05-08T15:05:55Z" | langchain/document_loaders/url_selenium.py | self,
urls: List[str],
continue_on_failure: bool = True,
browser: Literal["chrome", "firefox"] = "chrome",
executable_path: Optional[str] = None,
headless: bool = True,
arguments: List[str] = [],
):
"""Load a list of URLs using Selenium and unstructured."""
try:
import selenium
except ImportError:
raise ValueError(
"selenium package not found, please install it with "
"`pip install selenium`"
)
try:
import unstructured
except ImportError:
raise ValueError(
"unstructured package not found, please install it with "
"`pip install unstructured`"
)
self.urls = urls
self.continue_on_failure = continue_on_failure
self.browser = browser
self.executable_path = executable_path
self.headless = headless
self.arguments = arguments
def _get_driver(self) -> Union["Chrome", "Firefox"]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,304 | [Feature Request] Allow users to pass binary location to Selenium WebDriver | ### Feature request
Problem:
Unable to set binary_location for the Webdriver via SeleniumURLLoader
Proposal:
The proposal is to add a new arguments parameter to the SeleniumURLLoader that allows users to pass binary_location
### Motivation
To deploy Selenium on Heroku ([tutorial](https://romik-kelesh.medium.com/how-to-deploy-a-python-web-scraper-with-selenium-on-heroku-1459cb3ac76c)), the browser binary must be installed as a buildpack and its location must be set as the binary_location for the driver browser options. Currently when creating a Chrome or Firefox web driver via SeleniumURLLoader, users cannot set the binary_location of the WebDriver.
### Your contribution
I can submit the PR to add this capability to SeleniumURLLoader | https://github.com/langchain-ai/langchain/issues/4304 | https://github.com/langchain-ai/langchain/pull/4305 | 65c95f9fb2b86cf3281f2f3939b37e71f048f741 | 637c61cffbd279dc2431f9e224cfccec9c81f6cd | "2023-05-07T23:25:37Z" | python | "2023-05-08T15:05:55Z" | langchain/document_loaders/url_selenium.py | """Create and return a WebDriver instance based on the specified browser.
Raises:
ValueError: If an invalid browser is specified.
Returns:
Union[Chrome, Firefox]: A WebDriver instance for the specified browser.
"""
if self.browser.lower() == "chrome":
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.options import Options as ChromeOptions
chrome_options = ChromeOptions()
for arg in self.arguments:
chrome_options.add_argument(arg)
if self.headless:
chrome_options.add_argument("--headless")
chrome_options.add_argument("--no-sandbox")
if self.executable_path is None:
return Chrome(options=chrome_options)
return Chrome(executable_path=self.executable_path, options=chrome_options)
elif self.browser.lower() == "firefox":
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options as FirefoxOptions
firefox_options = FirefoxOptions()
for arg in self.arguments:
firefox_options.add_argument(arg)
if self.headless:
firefox_options.add_argument("--headless")
if self.executable_path is None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,304 | [Feature Request] Allow users to pass binary location to Selenium WebDriver | ### Feature request
Problem:
Unable to set binary_location for the Webdriver via SeleniumURLLoader
Proposal:
The proposal is to add a new arguments parameter to the SeleniumURLLoader that allows users to pass binary_location
### Motivation
To deploy Selenium on Heroku ([tutorial](https://romik-kelesh.medium.com/how-to-deploy-a-python-web-scraper-with-selenium-on-heroku-1459cb3ac76c)), the browser binary must be installed as a buildpack and its location must be set as the binary_location for the driver browser options. Currently when creating a Chrome or Firefox web driver via SeleniumURLLoader, users cannot set the binary_location of the WebDriver.
### Your contribution
I can submit the PR to add this capability to SeleniumURLLoader | https://github.com/langchain-ai/langchain/issues/4304 | https://github.com/langchain-ai/langchain/pull/4305 | 65c95f9fb2b86cf3281f2f3939b37e71f048f741 | 637c61cffbd279dc2431f9e224cfccec9c81f6cd | "2023-05-07T23:25:37Z" | python | "2023-05-08T15:05:55Z" | langchain/document_loaders/url_selenium.py | return Firefox(options=firefox_options)
return Firefox(
executable_path=self.executable_path, options=firefox_options
)
else:
raise ValueError("Invalid browser specified. Use 'chrome' or 'firefox'.")
def load(self) -> List[Document]:
"""Load the specified URLs using Selenium and create Document instances.
Returns:
List[Document]: A list of Document instances with loaded content.
"""
from unstructured.partition.html import partition_html
docs: List[Document] = list()
driver = self._get_driver()
for url in self.urls:
try:
driver.get(url)
page_content = driver.page_source
elements = partition_html(text=page_content)
text = "\n\n".join([str(el) for el in elements])
metadata = {"source": url}
docs.append(Document(page_content=text, metadata=metadata))
except Exception as e:
if self.continue_on_failure:
logger.error(f"Error fetching or processing {url}, exception: {e}")
else:
raise e
driver.quit()
return docs |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass a callback when using `load_tools`. I used to pass a callback_manager, but I understand that it's now deprecated. I was able to reproduce with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
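As a workaround, attaching the handler to the tool after `load_tools` returns, or passing it at call time, does seem to trigger the callbacks (sketch based on the same snippet as above):

```python
tools = load_tools(["terminal"])
tools[0].callbacks = [MyCustomHandler()]  # attach after loading
# or pass the handler per call:
print(tools[0].run({"commands": ["echo 'Hello World!'"]}, callbacks=[MyCustomHandler()]))
```

but I would expect the `callbacks` argument of `load_tools` itself to be honored.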
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | langchain/agents/load_tools.py | """Load tools."""
import warnings
from typing import Any, Dict, List, Optional, Callable, Tuple
from mypy_extensions import Arg, KwArg
from langchain.agents.tools import Tool
from langchain.base_language import BaseLanguageModel
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.api import news_docs, open_meteo_docs, podcast_docs, tmdb_docs
from langchain.chains.api.base import APIChain
from langchain.chains.llm_math.base import LLMMathChain
from langchain.chains.pal.base import PALChain
from langchain.requests import TextRequestsWrapper
from langchain.tools.arxiv.tool import ArxivQueryRun
from langchain.tools.base import BaseTool
from langchain.tools.bing_search.tool import BingSearchRun
from langchain.tools.ddg_search.tool import DuckDuckGoSearchRun
from langchain.tools.google_search.tool import GoogleSearchResults, GoogleSearchRun
from langchain.tools.google_serper.tool import GoogleSerperResults, GoogleSerperRun
from langchain.tools.human.tool import HumanInputRun |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass a callback when using `load_tools`. I used to pass a callback_manager, but I understand that it's now deprecated. I was able to reproduce with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | langchain/agents/load_tools.py | from langchain.tools.python.tool import PythonREPLTool
from langchain.tools.requests.tool import (
RequestsDeleteTool,
RequestsGetTool,
RequestsPatchTool,
RequestsPostTool,
RequestsPutTool,
)
from langchain.tools.scenexplain.tool import SceneXplainTool
from langchain.tools.searx_search.tool import SearxSearchResults, SearxSearchRun
from langchain.tools.shell.tool import ShellTool
from langchain.tools.wikipedia.tool import WikipediaQueryRun
from langchain.tools.wolfram_alpha.tool import WolframAlphaQueryRun
from langchain.utilities import ArxivAPIWrapper
from langchain.utilities.bing_search import BingSearchAPIWrapper
from langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper
from langchain.utilities.google_search import GoogleSearchAPIWrapper
from langchain.utilities.google_serper import GoogleSerperAPIWrapper
from langchain.utilities.awslambda import LambdaWrapper
from langchain.utilities.searx_search import SearxSearchWrapper
from langchain.utilities.serpapi import SerpAPIWrapper
from langchain.utilities.wikipedia import WikipediaAPIWrapper
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
def _get_python_repl() -> BaseTool:
return PythonREPLTool()
def _get_tools_requests_get() -> BaseTool:
return RequestsGetTool(requests_wrapper=TextRequestsWrapper())
def _get_tools_requests_post() -> BaseTool:
return RequestsPostTool(requests_wrapper=TextRequestsWrapper())
def _get_tools_requests_patch() -> BaseTool: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass a callback when using `load_tools`. I used to pass a callback_manager, but I understand that it's now deprecated. I was able to reproduce with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | langchain/agents/load_tools.py | return RequestsPatchTool(requests_wrapper=TextRequestsWrapper())
def _get_tools_requests_put() -> BaseTool:
return RequestsPutTool(requests_wrapper=TextRequestsWrapper())
def _get_tools_requests_delete() -> BaseTool:
return RequestsDeleteTool(requests_wrapper=TextRequestsWrapper())
def _get_terminal() -> BaseTool:
return ShellTool()
_BASE_TOOLS: Dict[str, Callable[[], BaseTool]] = {
"python_repl": _get_python_repl,
"requests": _get_tools_requests_get,
"requests_get": _get_tools_requests_get,
"requests_post": _get_tools_requests_post,
"requests_patch": _get_tools_requests_patch,
"requests_put": _get_tools_requests_put,
"requests_delete": _get_tools_requests_delete,
"terminal": _get_terminal,
}
def _get_pal_math(llm: BaseLanguageModel) -> BaseTool:
return Tool(
name="PAL-MATH",
description="A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.",
func=PALChain.from_math_prompt(llm).run,
)
def _get_pal_colored_objects(llm: BaseLanguageModel) -> BaseTool:
return Tool(
name="PAL-COLOR-OBJ",
description="A language model that is really good at reasoning about position and the color attributes of objects. Input should be a fully worded hard reasoning problem. Make sure to include all information about the objects AND the final question you want to answer.",
func=PALChain.from_colored_object_prompt(llm).run,
)
def _get_llm_math(llm: BaseLanguageModel) -> BaseTool: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass a callback when using `load_tools`. I used to pass a callback_manager, but I understand that it's now deprecated. I was able to reproduce with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | langchain/agents/load_tools.py | return Tool(
name="Calculator",
description="Useful for when you need to answer questions about math.",
func=LLMMathChain.from_llm(llm=llm).run,
coroutine=LLMMathChain.from_llm(llm=llm).arun,
)
def _get_open_meteo_api(llm: BaseLanguageModel) -> BaseTool:
chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS)
return Tool(
name="Open Meteo API",
description="Useful for when you want to get weather information from the OpenMeteo API. The input should be a question in natural language that this API can answer.",
func=chain.run,
)
_LLM_TOOLS: Dict[str, Callable[[BaseLanguageModel], BaseTool]] = {
"pal-math": _get_pal_math,
"pal-colored-objects": _get_pal_colored_objects,
"llm-math": _get_llm_math,
"open-meteo-api": _get_open_meteo_api,
}
def _get_news_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:
news_api_key = kwargs["news_api_key"]
chain = APIChain.from_llm_and_api_docs(
llm, news_docs.NEWS_DOCS, headers={"X-Api-Key": news_api_key}
)
return Tool(
name="News API",
description="Use this when you want to get information about the top headlines of current news stories. The input should be a question in natural language that this API can answer.",
func=chain.run,
)
def _get_tmdb_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass a callback when using `load_tools`. I used to pass a callback_manager, but I understand that it's now deprecated. I was able to reproduce with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | langchain/agents/load_tools.py | tmdb_bearer_token = kwargs["tmdb_bearer_token"]
chain = APIChain.from_llm_and_api_docs(
llm,
tmdb_docs.TMDB_DOCS,
headers={"Authorization": f"Bearer {tmdb_bearer_token}"},
)
return Tool(
name="TMDB API",
description="Useful for when you want to get information from The Movie Database. The input should be a question in natural language that this API can answer.",
func=chain.run,
)
def _get_podcast_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:
listen_api_key = kwargs["listen_api_key"]
chain = APIChain.from_llm_and_api_docs(
llm,
podcast_docs.PODCAST_DOCS,
headers={"X-ListenAPI-Key": listen_api_key},
)
return Tool(
name="Podcast API",
description="Use the Listen Notes Podcast API to search all podcasts or episodes. The input should be a question in natural language that this API can answer.",
func=chain.run,
)
def _get_lambda_api(**kwargs: Any) -> BaseTool:
return Tool(
name=kwargs["awslambda_tool_name"],
description=kwargs["awslambda_tool_description"],
func=LambdaWrapper(**kwargs).run,
)
def _get_wolfram_alpha(**kwargs: Any) -> BaseTool: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass a callback when using `load_tools`. I used to pass a callback_manager, but I understand that it's now deprecated. I was able to reproduce with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
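Until `load_tools` forwards callbacks itself, one possible workaround (just a sketch, reusing the handler defined above, and assuming the per-tool `callbacks` field and the `callbacks=` argument of `run` behave the same way they do for the directly instantiated `ShellTool`) is:

```python
tools = load_tools(["terminal"])
handler = MyCustomHandler()

# Attach the handler to each loaded tool after the fact ...
for t in tools:
    t.callbacks = [handler]

# ... or pass it per call instead of per tool.
print(tools[0].run({"commands": ["echo 'Hello World!'"]}, callbacks=[handler]))
```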
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | langchain/agents/load_tools.py | return WolframAlphaQueryRun(api_wrapper=WolframAlphaAPIWrapper(**kwargs))
def _get_google_search(**kwargs: Any) -> BaseTool:
return GoogleSearchRun(api_wrapper=GoogleSearchAPIWrapper(**kwargs))
def _get_wikipedia(**kwargs: Any) -> BaseTool:
return WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper(**kwargs))
def _get_arxiv(**kwargs: Any) -> BaseTool:
return ArxivQueryRun(api_wrapper=ArxivAPIWrapper(**kwargs))
def _get_google_serper(**kwargs: Any) -> BaseTool:
return GoogleSerperRun(api_wrapper=GoogleSerperAPIWrapper(**kwargs))
def _get_google_serper_results_json(**kwargs: Any) -> BaseTool:
return GoogleSerperResults(api_wrapper=GoogleSerperAPIWrapper(**kwargs))
def _get_google_search_results_json(**kwargs: Any) -> BaseTool:
return GoogleSearchResults(api_wrapper=GoogleSearchAPIWrapper(**kwargs))
def _get_serpapi(**kwargs: Any) -> BaseTool:
return Tool(
name="Search",
description="A search engine. Useful for when you need to answer questions about current events. Input should be a search query.",
func=SerpAPIWrapper(**kwargs).run,
coroutine=SerpAPIWrapper(**kwargs).arun,
)
def _get_searx_search(**kwargs: Any) -> BaseTool:
return SearxSearchRun(wrapper=SearxSearchWrapper(**kwargs))
def _get_searx_search_results_json(**kwargs: Any) -> BaseTool: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass callbacks when using `load_tools`. I used to pass a `callback_manager`, but I understand that it's now deprecated. I was able to reproduce the problem with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | langchain/agents/load_tools.py | wrapper_kwargs = {k: v for k, v in kwargs.items() if k != "num_results"}
return SearxSearchResults(wrapper=SearxSearchWrapper(**wrapper_kwargs), **kwargs)
def _get_bing_search(**kwargs: Any) -> BaseTool:
return BingSearchRun(api_wrapper=BingSearchAPIWrapper(**kwargs))
def _get_ddg_search(**kwargs: Any) -> BaseTool:
return DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper(**kwargs))
def _get_human_tool(**kwargs: Any) -> BaseTool:
return HumanInputRun(**kwargs)
def _get_scenexplain(**kwargs: Any) -> BaseTool:
return SceneXplainTool(**kwargs)
_EXTRA_LLM_TOOLS: Dict[
str,
Tuple[Callable[[Arg(BaseLanguageModel, "llm"), KwArg(Any)], BaseTool], List[str]],
] = {
"news-api": (_get_news_api, ["news_api_key"]),
"tmdb-api": (_get_tmdb_api, ["tmdb_bearer_token"]),
"podcast-api": (_get_podcast_api, ["listen_api_key"]),
}
_EXTRA_OPTIONAL_TOOLS: Dict[str, Tuple[Callable[[KwArg(Any)], BaseTool], List[str]]] = {
"wolfram-alpha": (_get_wolfram_alpha, ["wolfram_alpha_appid"]),
"google-search": (_get_google_search, ["google_api_key", "google_cse_id"]),
"google-search-results-json": (
_get_google_search_results_json,
["google_api_key", "google_cse_id", "num_results"],
),
"searx-search-results-json": (
_get_searx_search_results_json,
["searx_host", "engines", "num_results", "aiosession"],
),
"bing-search": (_get_bing_search, ["bing_subscription_key", "bing_search_url"]), |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass callbacks when using `load_tools`. I used to pass a `callback_manager`, but I understand that it's now deprecated. I was able to reproduce the problem with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | langchain/agents/load_tools.py | "ddg-search": (_get_ddg_search, []),
"google-serper": (_get_google_serper, ["serper_api_key", "aiosession"]),
"google-serper-results-json": (
_get_google_serper_results_json,
["serper_api_key", "aiosession"],
),
"serpapi": (_get_serpapi, ["serpapi_api_key", "aiosession"]),
"searx-search": (_get_searx_search, ["searx_host", "engines", "aiosession"]),
"wikipedia": (_get_wikipedia, ["top_k_results", "lang"]),
"arxiv": (
_get_arxiv,
["top_k_results", "load_max_docs", "load_all_available_meta"],
),
"human": (_get_human_tool, ["prompt_func", "input_func"]),
"awslambda": (
_get_lambda_api,
["awslambda_tool_name", "awslambda_tool_description", "function_name"],
),
"sceneXplain": (_get_scenexplain, []),
}
def load_tools(
tool_names: List[str],
llm: Optional[BaseLanguageModel] = None,
callback_manager: Optional[BaseCallbackManager] = None,
**kwargs: Any,
) -> List[BaseTool]:
"""Load tools based on their name.
Args:
tool_names: name of tools to load.
llm: Optional language model, may be needed to initialize certain tools. |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass callbacks when using `load_tools`. I used to pass a `callback_manager`, but I understand that it's now deprecated. I was able to reproduce the problem with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | langchain/agents/load_tools.py | callback_manager: Optional callback manager. If not provided, default global callback manager will be used.
Returns:
List of tools.
"""
tools = []
for name in tool_names:
if name == "requests":
warnings.warn(
"tool name `requests` is deprecated - "
"please use `requests_all` or specify the requests method"
)
if name == "requests_all":
requests_method_tools = [
_tool for _tool in _BASE_TOOLS if _tool.startswith("requests_")
]
tool_names.extend(requests_method_tools)
elif name in _BASE_TOOLS:
tools.append(_BASE_TOOLS[name]())
elif name in _LLM_TOOLS:
if llm is None:
raise ValueError(f"Tool {name} requires an LLM to be provided")
tool = _LLM_TOOLS[name](llm)
if callback_manager is not None:
tool.callback_manager = callback_manager
tools.append(tool)
elif name in _EXTRA_LLM_TOOLS:
if llm is None:
raise ValueError(f"Tool {name} requires an LLM to be provided")
_get_llm_tool_func, extra_keys = _EXTRA_LLM_TOOLS[name] |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass callbacks when using `load_tools`. I used to pass a `callback_manager`, but I understand that it's now deprecated. I was able to reproduce the problem with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | langchain/agents/load_tools.py | missing_keys = set(extra_keys).difference(kwargs)
if missing_keys:
raise ValueError(
f"Tool {name} requires some parameters that were not "
f"provided: {missing_keys}"
)
sub_kwargs = {k: kwargs[k] for k in extra_keys}
tool = _get_llm_tool_func(llm=llm, **sub_kwargs)
if callback_manager is not None:
tool.callback_manager = callback_manager
tools.append(tool)
elif name in _EXTRA_OPTIONAL_TOOLS:
_get_tool_func, extra_keys = _EXTRA_OPTIONAL_TOOLS[name]
sub_kwargs = {k: kwargs[k] for k in extra_keys if k in kwargs}
tool = _get_tool_func(**sub_kwargs)
if callback_manager is not None:
tool.callback_manager = callback_manager
tools.append(tool)
else:
raise ValueError(f"Got unknown tool {name}")
return tools
def get_all_tool_names() -> List[str]:
"""Get a list of all possible tool names."""
return (
list(_BASE_TOOLS)
+ list(_EXTRA_OPTIONAL_TOOLS)
+ list(_EXTRA_LLM_TOOLS)
+ list(_LLM_TOOLS)
) |
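`load_tools` above only honors the deprecated `callback_manager` argument when constructing tools. A hypothetical sketch of forwarding per-tool `callbacks` instead (the helper name and signature here are assumptions, not the change from the linked pull request) might look like this:

```python
from typing import List, Optional

from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools.base import BaseTool


def _attach_callbacks(
    tools: List[BaseTool],
    callbacks: Optional[List[BaseCallbackHandler]] = None,
) -> List[BaseTool]:
    """Hypothetical helper: set `callbacks` on every loaded tool, mirroring
    the way the deprecated `callback_manager` is assigned in load_tools."""
    if callbacks is not None:
        for tool in tools:
            tool.callbacks = callbacks
    return tools
```

Used as, for example, `_attach_callbacks(load_tools(["terminal"]), callbacks=[MyCustomHandler()])`.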
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass callbacks when using `load_tools`. I used to pass a `callback_manager`, but I understand that it's now deprecated. I was able to reproduce the problem with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | tests/unit_tests/agents/test_tools.py | """Test tool utils."""
from typing import Any, Type
from unittest.mock import MagicMock
import pytest
from langchain.agents.agent import Agent
from langchain.agents.chat.base import ChatAgent
from langchain.agents.conversational.base import ConversationalAgent
from langchain.agents.conversational_chat.base import ConversationalChatAgent
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.react.base import ReActDocstoreAgent, ReActTextWorldAgent
from langchain.agents.self_ask_with_search.base import SelfAskWithSearchAgent
from langchain.agents.tools import Tool, tool
@pytest.mark.parametrize(
"agent_cls",
[
ZeroShotAgent,
ChatAgent,
ConversationalChatAgent,
ConversationalAgent,
ReActDocstoreAgent,
ReActTextWorldAgent,
SelfAskWithSearchAgent,
],
)
def test_single_input_agent_raises_error_on_structured_tool( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,096 | Callbacks are ignored when passed to load_tools | Hello,
I cannot figure out how to pass callbacks when using `load_tools`. I used to pass a `callback_manager`, but I understand that it's now deprecated. I was able to reproduce the problem with the following snippet:
```python
from langchain.agents import load_tools
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import ShellTool
class MyCustomHandler(BaseCallbackHandler):
def on_tool_start(self, serialized, input_str: str, **kwargs):
"""Run when tool starts running."""
print("ON TOOL START!")
def on_tool_end(self, output: str, **kwargs):
"""Run when tool ends running."""
print("ON TOOL END!")
# load_tools doesn't work
print("LOAD TOOLS!")
tools = load_tools(["terminal"], callbacks=[MyCustomHandler()])
print(tools[0].run({"commands": ["echo 'Hello World!'", "time"]}))
# direct tool instantiation works
print("Direct tool")
shell_tool = ShellTool(callbacks=[MyCustomHandler()])
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
```
Here is the output I'm seeing:
```
LOAD TOOLS!
/home/lothiraldan/project/cometml/langchain/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Hello World!
user 0m0,00s
sys 0m0,00s
Direct tool
ON TOOL START!
ON TOOL END!
Hello World!
user 0m0,00s
sys 0m0,00s
```
In this example, when I pass the callbacks to `load_tools`, the `on_tool_*` methods are not called. But maybe it's not the correct way to pass callbacks to the `load_tools` helper.
I reproduced with Langchain master, specifically the following commit https://github.com/hwchase17/langchain/commit/a9c24503309e2e3eb800f335e0fbc7c22531bda0.
Pip list output:
```
Package Version Editable project location
----------------------- --------- -------------------------------------------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
certifi 2022.12.7
charset-normalizer 3.1.0
dataclasses-json 0.5.7
frozenlist 1.3.3
greenlet 2.0.2
idna 3.4
langchain 0.0.157 /home/lothiraldan/project/cometml/langchain
marshmallow 3.19.0
marshmallow-enum 1.5.1
multidict 6.0.4
mypy-extensions 1.0.0
numexpr 2.8.4
numpy 1.24.3
openai 0.27.6
openapi-schema-pydantic 1.2.4
packaging 23.1
pip 23.0.1
pydantic 1.10.7
PyYAML 6.0
requests 2.29.0
setuptools 67.6.1
SQLAlchemy 2.0.12
tenacity 8.2.2
tqdm 4.65.0
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.15
wheel 0.40.0
yarl 1.9.2
``` | https://github.com/langchain-ai/langchain/issues/4096 | https://github.com/langchain-ai/langchain/pull/4298 | 0870a45a697a75ac839b724311ce7a8b59a09058 | 35c9e6ab407003e0c1f16fcf6d4c73f6637db731 | "2023-05-04T09:05:12Z" | python | "2023-05-08T15:44:26Z" | tests/unit_tests/agents/test_tools.py | agent_cls: Type[Agent],
) -> None:
"""Test that older agents raise errors on older tools."""
@tool
def the_tool(foo: str, bar: str) -> str:
"""Return the concat of foo and bar."""
return foo + bar
with pytest.raises(
ValueError,
match=f"{agent_cls.__name__} does not support"
f" multi-input tool {the_tool.name}.",
):
agent_cls.from_llm_and_tools(MagicMock(), [the_tool])
def test_tool_no_args_specified_assumes_str() -> None:
"""Older tools could assume *args and **kwargs were passed in."""
def ambiguous_function(*args: Any, **kwargs: Any) -> str:
"""An ambiguously defined function."""
return args[0]
some_tool = Tool(
name="chain_run",
description="Run the chain",
func=ambiguous_function,
)
expected_args = {"tool_input": {"type": "string"}}
assert some_tool.args == expected_args
assert some_tool.run("foobar") == "foobar"
assert some_tool.run({"tool_input": "foobar"}) == "foobar"
with pytest.raises(ValueError, match="Too many arguments to single-input tool"):
some_tool.run({"tool_input": "foobar", "other_input": "bar"}) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
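For example (a sketch of the mismatch described above; it assumes `OPENAI_API_KEY` is set so that the constructor's environment check passes):

```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")  # accepted: "model" is swept into model_kwargs
print(llm.model_name)            # still the default "gpt-3.5-turbo"
print(llm.model_kwargs)          # {'model': 'gpt-4'}, and this is what the API call uses
```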
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
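A minimal sketch of what options (1)/(2) could look like, for illustration only (the class name is a stand-in and this is not the change from the linked pull request):

```python
import warnings
from typing import Any, Dict

from pydantic import BaseModel, Field, root_validator


class ChatOpenAISketch(BaseModel):
    """Stand-in for ChatOpenAI, only to illustrate the warning."""

    model_name: str = "gpt-3.5-turbo"
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)

    @root_validator(pre=True)
    def warn_on_model_kwarg(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        # Options (1)/(2): surface the silent override instead of ignoring it.
        if "model" in values:
            warnings.warn(
                "`model` was passed, but this class uses `model_name`; "
                "the value will be treated as an extra model kwarg."
            )
        return values
```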
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/chat_models/openai.py | """OpenAI chat wrapper."""
from __future__ import annotations
import logging
import sys
from typing import Any, Callable, Dict, List, Mapping, Optional, Tuple, Union
from pydantic import Extra, Field, root_validator
from tenacity import (
before_sleep_log,
retry,
retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
from langchain.callbacks.manager import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain.chat_models.base import BaseChatModel
from langchain.schema import (
AIMessage,
BaseMessage,
ChatGeneration,
ChatMessage,
ChatResult,
HumanMessage,
SystemMessage,
)
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__name__)
def _create_retry_decorator(llm: ChatOpenAI) -> Callable[[Any], Any]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/chat_models/openai.py | import openai
min_seconds = 1
max_seconds = 60
return retry(
reraise=True,
stop=stop_after_attempt(llm.max_retries),
wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(openai.error.Timeout)
| retry_if_exception_type(openai.error.APIError)
| retry_if_exception_type(openai.error.APIConnectionError)
| retry_if_exception_type(openai.error.RateLimitError)
| retry_if_exception_type(openai.error.ServiceUnavailableError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
async def acompletion_with_retry(llm: ChatOpenAI, **kwargs: Any) -> Any:
"""Use tenacity to retry the async completion call."""
retry_decorator = _create_retry_decorator(llm)
@retry_decorator
async def _completion_with_retry(**kwargs: Any) -> Any: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/chat_models/openai.py | return await llm.client.acreate(**kwargs)
return await _completion_with_retry(**kwargs)
def _convert_dict_to_message(_dict: dict) -> BaseMessage:
role = _dict["role"]
if role == "user":
return HumanMessage(content=_dict["content"])
elif role == "assistant":
return AIMessage(content=_dict["content"])
elif role == "system":
return SystemMessage(content=_dict["content"])
else:
return ChatMessage(content=_dict["content"], role=role)
def _convert_message_to_dict(message: BaseMessage) -> dict:
if isinstance(message, ChatMessage):
message_dict = {"role": message.role, "content": message.content}
elif isinstance(message, HumanMessage):
message_dict = {"role": "user", "content": message.content}
elif isinstance(message, AIMessage):
message_dict = {"role": "assistant", "content": message.content}
elif isinstance(message, SystemMessage):
message_dict = {"role": "system", "content": message.content}
else:
raise ValueError(f"Got unknown type {message}")
if "name" in message.additional_kwargs:
message_dict["name"] = message.additional_kwargs["name"]
return message_dict
class ChatOpenAI(BaseChatModel): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/chat_models/openai.py | """Wrapper around OpenAI Chat large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
"""
client: Any
model_name: str = "gpt-3.5-turbo"
"""Model name to use."""
temperature: float = 0.7
"""What sampling temperature to use."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
openai_api_key: Optional[str] = None
openai_organization: Optional[str] = None
request_timeout: Optional[Union[float, Tuple[float, float]]] = None
"""Timeout for requests to OpenAI completion API. Default is 600 seconds."""
max_retries: int = 6
"""Maximum number of retries to make when generating."""
streaming: bool = False
"""Whether to stream the results or not."""
n: int = 1
"""Number of chat completions to generate for each prompt."""
max_tokens: Optional[int] = None
"""Maximum number of tokens to generate."""
class Config: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/chat_models/openai.py | """Configuration for this pydantic object."""
extra = Extra.ignore
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = {field.alias for field in cls.__fields__.values()}
extra = values.get("model_kwargs", {})
for field_name in list(values):
if field_name not in all_required_field_names:
if field_name in extra:
raise ValueError(f"Found {field_name} supplied twice.")
extra[field_name] = values.pop(field_name)
values["model_kwargs"] = extra
return values
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
openai_api_key = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
openai_organization = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="",
)
openai_api_base = get_from_dict_or_env(
values,
"openai_api_base", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/chat_models/openai.py | "OPENAI_API_BASE",
default="",
)
try:
import openai
except ImportError:
raise ValueError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
openai.api_key = openai_api_key
if openai_organization:
openai.organization = openai_organization
if openai_api_base:
openai.api_base = openai_api_base
try:
values["client"] = openai.ChatCompletion
except AttributeError:
raise ValueError(
"`openai` has no `ChatCompletion` attribute, this is likely "
"due to an old version of the openai package. Try upgrading it "
"with `pip install --upgrade openai`."
)
if values["n"] < 1:
raise ValueError("n must be at least 1.")
if values["n"] > 1 and values["streaming"]:
raise ValueError("n must be 1 when streaming.")
return values
@property
def _default_params(self) -> Dict[str, Any]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/chat_models/openai.py | """Get the default parameters for calling OpenAI API."""
return {
"model": self.model_name,
"request_timeout": self.request_timeout,
"max_tokens": self.max_tokens,
"stream": self.streaming,
"n": self.n,
"temperature": self.temperature,
**self.model_kwargs,
}
def _create_retry_decorator(self) -> Callable[[Any], Any]:
import openai
min_seconds = 1
max_seconds = 60
return retry(
reraise=True,
stop=stop_after_attempt(self.max_retries),
wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(openai.error.Timeout)
| retry_if_exception_type(openai.error.APIError)
| retry_if_exception_type(openai.error.APIConnectionError)
| retry_if_exception_type(openai.error.RateLimitError)
| retry_if_exception_type(openai.error.ServiceUnavailableError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def completion_with_retry(self, **kwargs: Any) -> Any: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/chat_models/openai.py | """Use tenacity to retry the completion call."""
retry_decorator = self._create_retry_decorator()
@retry_decorator
def _completion_with_retry(**kwargs: Any) -> Any:
return self.client.create(**kwargs)
return _completion_with_retry(**kwargs)
def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:
overall_token_usage: dict = {}
for output in llm_outputs:
if output is None:
continue
token_usage = output["token_usage"]
for k, v in token_usage.items():
if k in overall_token_usage:
overall_token_usage[k] += v
else:
overall_token_usage[k] = v
return {"token_usage": overall_token_usage, "model_name": self.model_name}
def _generate( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/chat_models/openai.py | self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> ChatResult:
message_dicts, params = self._create_message_dicts(messages, stop)
if self.streaming:
inner_completion = ""
role = "assistant"
params["stream"] = True
for stream_resp in self.completion_with_retry(
messages=message_dicts, **params
):
role = stream_resp["choices"][0]["delta"].get("role", role)
token = stream_resp["choices"][0]["delta"].get("content", "")
inner_completion += token
if run_manager:
run_manager.on_llm_new_token(token)
message = _convert_dict_to_message(
{"content": inner_completion, "role": role}
)
return ChatResult(generations=[ChatGeneration(message=message)])
response = self.completion_with_retry(messages=message_dicts, **params)
return self._create_chat_result(response)
def _create_message_dicts( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/chat_models/openai.py | self, messages: List[BaseMessage], stop: Optional[List[str]]
) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
params: Dict[str, Any] = {**{"model": self.model_name}, **self._default_params}
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
message_dicts = [_convert_message_to_dict(m) for m in messages]
return message_dicts, params
def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:
generations = []
for res in response["choices"]:
message = _convert_dict_to_message(res["message"])
gen = ChatGeneration(message=message)
generations.append(gen)
llm_output = {"token_usage": response["usage"], "model_name": self.model_name}
return ChatResult(generations=generations, llm_output=llm_output)
    async def _agenerate(
        self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
) -> ChatResult:
message_dicts, params = self._create_message_dicts(messages, stop)
if self.streaming:
inner_completion = ""
role = "assistant"
params["stream"] = True
async for stream_resp in await acompletion_with_retry(
self, messages=message_dicts, **params
):
role = stream_resp["choices"][0]["delta"].get("role", role)
token = stream_resp["choices"][0]["delta"].get("content", "")
inner_completion += token
if run_manager:
await run_manager.on_llm_new_token(token)
message = _convert_dict_to_message(
{"content": inner_completion, "role": role}
)
return ChatResult(generations=[ChatGeneration(message=message)])
else:
response = await acompletion_with_retry(
self, messages=message_dicts, **params
)
return self._create_chat_result(response)
@property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
return {**{"model_name": self.model_name}, **self._default_params}
def get_num_tokens(self, text: str) -> int:
"""Calculate num tokens with tiktoken package."""
if sys.version_info[1] <= 7:
return super().get_num_tokens(text)
try:
import tiktoken
except ImportError:
raise ValueError(
"Could not import tiktoken python package. "
"This is needed in order to calculate get_num_tokens. "
"Please install it with `pip install tiktoken`."
)
enc = tiktoken.encoding_for_model(self.model_name)
tokenized_text = enc.encode(text)
return len(tokenized_text)
    def get_num_tokens_from_messages(self, messages: List[BaseMessage]) -> int:
        """Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/
main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb"""
try:
import tiktoken
except ImportError:
raise ValueError(
"Could not import tiktoken python package. "
"This is needed in order to calculate get_num_tokens. "
"Please install it with `pip install tiktoken`."
)
model = self.model_name
if model == "gpt-3.5-turbo":
model = "gpt-3.5-turbo-0301"
elif model == "gpt-4":
model = "gpt-4-0314"
try:
            encoding = tiktoken.encoding_for_model(model)
        except KeyError:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
encoding = tiktoken.get_encoding("cl100k_base")
if model == "gpt-3.5-turbo-0301":
tokens_per_message = 4
tokens_per_name = -1
elif model == "gpt-4-0314":
tokens_per_message = 3
tokens_per_name = 1
else:
raise NotImplementedError(
f"get_num_tokens_from_messages() is not presently implemented "
f"for model {model}."
"See https://github.com/openai/openai-python/blob/main/chatml.md for "
"information on how messages are converted to tokens."
)
num_tokens = 0
messages_dict = [_convert_message_to_dict(m) for m in messages]
for message in messages_dict:
num_tokens += tokens_per_message
for key, value in message.items():
num_tokens += len(encoding.encode(value))
if key == "name":
num_tokens += tokens_per_name
num_tokens += 3
        return num_tokens
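Suggestion (2) from the issue could be prototyped with a small Pydantic validator along the lines of the sketch below. This is only an illustration (the subclass name `SafeChatOpenAI` is hypothetical, and it assumes the `openai` package and an API key are available); it is not necessarily what PR #4366 implements:

```python
import warnings
from typing import Any, Dict

from pydantic import root_validator

from langchain.chat_models import ChatOpenAI


class SafeChatOpenAI(ChatOpenAI):
    """ChatOpenAI variant that warns when `model` sneaks in via model_kwargs."""

    @root_validator()
    def warn_if_model_conflicts(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        # By the time this runs, the parent pre-validator has already moved unknown
        # kwargs (such as `model`) into model_kwargs.
        kwargs_model = values.get("model_kwargs", {}).get("model")
        if kwargs_model is not None and kwargs_model != values.get("model_name"):
            warnings.warn(
                f"'model' ({kwargs_model!r}) differs from model_name "
                f"({values.get('model_name')!r}); prefer passing model_name."
            )
        return values
```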
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/llms/openai.py | """Wrapper around OpenAI APIs."""
from __future__ import annotations
import logging
import sys
import warnings
from typing import (
AbstractSet,
Any,
Callable,
Collection,
Dict,
Generator,
List,
Literal,
Mapping,
Optional,
Set,
Tuple,
Union,
)
from pydantic import Extra, Field, root_validator
from tenacity import (
before_sleep_log,
    retry,
    retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
from langchain.callbacks.manager import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain.llms.base import BaseLLM
from langchain.schema import Generation, LLMResult
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__name__)
def update_token_usage(
keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]
) -> None:
"""Update token usage."""
_keys_to_use = keys.intersection(response["usage"])
for _key in _keys_to_use:
if _key not in token_usage:
token_usage[_key] = response["usage"][_key]
else:
token_usage[_key] += response["usage"][_key]
def _update_response(response: Dict[str, Any], stream_response: Dict[str, Any]) -> None:
"""Update response from the stream response."""
response["choices"][0]["text"] += stream_response["choices"][0]["text"]
response["choices"][0]["finish_reason"] = stream_response["choices"][0][
"finish_reason"
]
response["choices"][0]["logprobs"] = stream_response["choices"][0]["logprobs"]
def _streaming_response_template() -> Dict[str, Any]:
    return {
"choices": [
{
"text": "",
"finish_reason": None,
"logprobs": None,
}
]
}
def _create_retry_decorator(llm: Union[BaseOpenAI, OpenAIChat]) -> Callable[[Any], Any]:
import openai
min_seconds = 4
max_seconds = 10
return retry(
reraise=True,
stop=stop_after_attempt(llm.max_retries),
wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(openai.error.Timeout)
| retry_if_exception_type(openai.error.APIError)
| retry_if_exception_type(openai.error.APIConnectionError)
| retry_if_exception_type(openai.error.RateLimitError)
| retry_if_exception_type(openai.error.ServiceUnavailableError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) -> Any:
    """Use tenacity to retry the completion call."""
retry_decorator = _create_retry_decorator(llm)
@retry_decorator
def _completion_with_retry(**kwargs: Any) -> Any:
return llm.client.create(**kwargs)
return _completion_with_retry(**kwargs)
async def acompletion_with_retry(
llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any
) -> Any:
"""Use tenacity to retry the async completion call."""
retry_decorator = _create_retry_decorator(llm)
@retry_decorator
async def _completion_with_retry(**kwargs: Any) -> Any:
return await llm.client.acreate(**kwargs)
return await _completion_with_retry(**kwargs)
class BaseOpenAI(BaseLLM):
    """Wrapper around OpenAI large language models."""
client: Any
model_name: str = "text-davinci-003"
"""Model name to use."""
temperature: float = 0.7
"""What sampling temperature to use."""
max_tokens: int = 256
"""The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the models maximal context size."""
top_p: float = 1
"""Total probability mass of tokens to consider at each step."""
frequency_penalty: float = 0
"""Penalizes repeated tokens according to frequency."""
presence_penalty: float = 0
"""Penalizes repeated tokens."""
n: int = 1
"""How many completions to generate for each prompt."""
best_of: int = 1
"""Generates best_of completions server-side and returns the "best"."""
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Holds any model parameters valid for `create` call not explicitly specified."""
openai_api_key: Optional[str] = None
openai_api_base: Optional[str] = None
openai_organization: Optional[str] = None
batch_size: int = 20
"""Batch size to use when passing multiple documents to generate."""
request_timeout: Optional[Union[float, Tuple[float, float]]] = None
"""Timeout for requests to OpenAI completion API. Default is 600 seconds."""
logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)
"""Adjust the probability of specific tokens being generated."""
max_retries: int = 6
"""Maximum number of retries to make when generating."""
streaming: bool = False
"""Whether to stream the results or not."""
allowed_special: Union[Literal["all"], AbstractSet[str]] = set()
"""Set of special tokens that are allowed。"""
disallowed_special: Union[Literal["all"], Collection[str]] = "all"
"""Set of special tokens that are not allowed。"""
    def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]:  # type: ignore
"""Initialize the OpenAI object."""
model_name = data.get("model_name", "")
if model_name.startswith("gpt-3.5-turbo") or model_name.startswith("gpt-4"):
warnings.warn(
"You are trying to use a chat model. This way of initializing it is "
"no longer supported. Instead, please use: "
"`from langchain.chat_models import ChatOpenAI`"
)
return OpenAIChat(**data)
return super().__new__(cls)
    class Config:
        """Configuration for this pydantic object."""
extra = Extra.ignore
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = {field.alias for field in cls.__fields__.values()}
extra = values.get("model_kwargs", {})
for field_name in list(values):
if field_name not in all_required_field_names:
if field_name in extra:
raise ValueError(f"Found {field_name} supplied twice.")
logger.warning(
f"""WARNING! {field_name} is not default parameter.
{field_name} was transfered to model_kwargs.
Please confirm that {field_name} is what you intended."""
)
extra[field_name] = values.pop(field_name)
values["model_kwargs"] = extra
return values
@root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
openai_api_key = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
openai_api_base = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
openai_organization = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="",
)
try:
import openai
openai.api_key = openai_api_key
if openai_api_base:
openai.api_base = openai_api_base
if openai_organization:
                openai.organization = openai_organization
            values["client"] = openai.Completion
except ImportError:
raise ValueError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
if values["streaming"] and values["n"] > 1:
raise ValueError("Cannot stream results when n > 1.")
if values["streaming"] and values["best_of"] > 1:
raise ValueError("Cannot stream results when best_of > 1.")
return values
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling OpenAI API."""
normal_params = {
"temperature": self.temperature,
"max_tokens": self.max_tokens,
"top_p": self.top_p,
"frequency_penalty": self.frequency_penalty,
"presence_penalty": self.presence_penalty,
"n": self.n,
"request_timeout": self.request_timeout,
"logit_bias": self.logit_bias,
}
        # Azure gpt-35-turbo doesn't support best_of
        # don't specify best_of if it is 1
if self.best_of > 1:
normal_params["best_of"] = self.best_of
return {**normal_params, **self.model_kwargs}
    def _generate(
        self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> LLMResult:
"""Call out to OpenAI's endpoint with k unique prompts.
Args:
prompts: The prompts to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
The full LLM output.
Example:
.. code-block:: python
response = openai.generate(["Tell me a joke."])
"""
        # TODO: write a unit test for this
params = self._invocation_params
        sub_prompts = self.get_sub_prompts(params, prompts, stop)
        choices = []
token_usage: Dict[str, int] = {}
        # Get the token usage from the response.
        # Includes prompt, completion, and total tokens used.
_keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
for _prompts in sub_prompts:
if self.streaming:
if len(_prompts) > 1:
raise ValueError("Cannot stream results with multiple prompts.")
params["stream"] = True
response = _streaming_response_template()
for stream_resp in completion_with_retry(
self, prompt=_prompts, **params
):
if run_manager:
run_manager.on_llm_new_token(
stream_resp["choices"][0]["text"],
verbose=self.verbose,
logprobs=stream_resp["choices"][0]["logprobs"],
)
_update_response(response, stream_resp)
choices.extend(response["choices"])
else:
response = completion_with_retry(self, prompt=_prompts, **params)
choices.extend(response["choices"])
if not self.streaming:
                # Can't update token usage if streaming
update_token_usage(_keys, response, token_usage)
return self.create_llm_result(choices, prompts, token_usage)
    async def _agenerate(
        self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
) -> LLMResult:
"""Call out to OpenAI's endpoint async with k unique prompts."""
params = self._invocation_params
sub_prompts = self.get_sub_prompts(params, prompts, stop)
choices = []
token_usage: Dict[str, int] = {}
        # Get the token usage from the response.
        # Includes prompt, completion, and total tokens used.
_keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
for _prompts in sub_prompts:
if self.streaming:
if len(_prompts) > 1:
raise ValueError("Cannot stream results with multiple prompts.")
params["stream"] = True
response = _streaming_response_template()
async for stream_resp in await acompletion_with_retry(
self, prompt=_prompts, **params
):
if run_manager:
await run_manager.on_llm_new_token(
stream_resp["choices"][0]["text"],
verbose=self.verbose,
logprobs=stream_resp["choices"][0]["logprobs"],
)
_update_response(response, stream_resp)
                choices.extend(response["choices"])
            else:
response = await acompletion_with_retry(self, prompt=_prompts, **params)
choices.extend(response["choices"])
if not self.streaming:
                # Can't update token usage if streaming
update_token_usage(_keys, response, token_usage)
return self.create_llm_result(choices, prompts, token_usage)
def get_sub_prompts(
self,
params: Dict[str, Any],
prompts: List[str],
stop: Optional[List[str]] = None,
) -> List[List[str]]:
"""Get the sub prompts for llm call."""
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
if params["max_tokens"] == -1:
if len(prompts) != 1:
raise ValueError(
"max_tokens set to -1 not supported for multiple inputs."
)
params["max_tokens"] = self.max_tokens_for_prompt(prompts[0])
sub_prompts = [
prompts[i : i + self.batch_size]
for i in range(0, len(prompts), self.batch_size)
]
return sub_prompts
    def create_llm_result(
        self, choices: Any, prompts: List[str], token_usage: Dict[str, int]
) -> LLMResult:
"""Create the LLMResult from the choices and prompts."""
generations = []
for i, _ in enumerate(prompts):
sub_choices = choices[i * self.n : (i + 1) * self.n]
generations.append(
[
Generation(
text=choice["text"],
generation_info=dict(
finish_reason=choice.get("finish_reason"),
logprobs=choice.get("logprobs"),
),
)
for choice in sub_choices
]
)
llm_output = {"token_usage": token_usage, "model_name": self.model_name}
return LLMResult(generations=generations, llm_output=llm_output)
    def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:
        """Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Args:
prompt: The prompts to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
A generator representing the stream of tokens from OpenAI.
Example:
.. code-block:: python
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
"""
params = self.prep_streaming_params(stop)
generator = self.client.create(prompt=prompt, **params)
return generator
    def prep_streaming_params(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:
        """Prepare the params for streaming."""
params = self._invocation_params
if params["best_of"] != 1:
raise ValueError("OpenAI only supports best_of == 1 for streaming")
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
params["stream"] = True
return params
@property
def _invocation_params(self) -> Dict[str, Any]:
"""Get the parameters used to invoke the model."""
return self._default_params
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {**{"model_name": self.model_name}, **self._default_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "openai"
    def get_num_tokens(self, text: str) -> int:
        """Calculate num tokens with tiktoken package."""
# tiktoken is not supported for Python 3.7 or below; fall back to the superclass implementation
if sys.version_info[1] < 8:
return super().get_num_tokens(text)
try:
import tiktoken
except ImportError:
raise ValueError(
"Could not import tiktoken python package. "
"This is needed in order to calculate get_num_tokens. "
"Please install it with `pip install tiktoken`."
)
enc = tiktoken.encoding_for_model(self.model_name)
tokenized_text = enc.encode(
text,
allowed_special=self.allowed_special,
disallowed_special=self.disallowed_special,
)
# calculate the number of tokens in the tokenized text
return len(tokenized_text)
def modelname_to_contextsize(self, modelname: str) -> int:
"""Calculate the maximum number of tokens possible to generate for a model.
Args:
modelname: The modelname we want to know the context size for.
Returns:
The maximum context size
Example:
.. code-block:: python
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
"""
model_token_mapping = {
"gpt-4": 8192,
"gpt-4-0314": 8192,
"gpt-4-32k": 32768,
"gpt-4-32k-0314": 32768,
"gpt-3.5-turbo": 4096,
"gpt-3.5-turbo-0301": 4096,
"text-ada-001": 2049,
"ada": 2049,
"text-babbage-001": 2040,
"babbage": 2049,
"text-curie-001": 2049,
"curie": 2049,
"davinci": 2049,
"text-davinci-003": 4097,
"text-davinci-002": 4097,
"code-davinci-002": 8001,
"code-davinci-001": 8001,
"code-cushman-002": 2048,
"code-cushman-001": 2048,
}
context_size = model_token_mapping.get(modelname, None)
if context_size is None:
raise ValueError(
f"Unknown model: {modelname}. Please provide a valid OpenAI model name."
"Known models are: " + ", ".join(model_token_mapping.keys())
)
return context_size
def max_tokens_for_prompt(self, prompt: str) -> int:
"""Calculate the maximum number of tokens possible to generate for a prompt.
Args:
prompt: The prompt to pass into the model.
Returns:
The maximum number of tokens to generate for a prompt.
Example:
.. code-block:: python
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
"""
num_tokens = self.get_num_tokens(prompt)
# get max context size for model by name
max_size = self.modelname_to_contextsize(self.model_name)
return max_size - num_tokens
class OpenAI(BaseOpenAI):
"""Wrapper around OpenAI large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
"""
@property
def _invocation_params(self) -> Dict[str, Any]:
return {**{"model": self.model_name}, **super()._invocation_params}
class AzureOpenAI(BaseOpenAI):
"""Wrapper around Azure-specific OpenAI large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
"""
deployment_name: str = ""
"""Deployment name to use."""
@property
def _identifying_params(self) -> Mapping[str, Any]:
return {
**{"deployment_name": self.deployment_name},
**super()._identifying_params,
}
@property
def _invocation_params(self) -> Dict[str, Any]:
return {**{"engine": self.deployment_name}, **super()._invocation_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "azure"
class OpenAIChat(BaseLLM):
"""Wrapper around OpenAI Chat large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
"""
client: Any
model_name: str = "gpt-3.5-turbo"
"""Model name to use."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
openai_api_key: Optional[str] = None
openai_api_base: Optional[str] = None
max_retries: int = 6
"""Maximum number of retries to make when generating."""
prefix_messages: List = Field(default_factory=list)
"""Series of messages for Chat input."""
streaming: bool = False
"""Whether to stream the results or not."""
allowed_special: Union[Literal["all"], AbstractSet[str]] = set()
"""Set of special tokens that are allowed。"""
disallowed_special: Union[Literal["all"], Collection[str]] = "all"
"""Set of special tokens that are not allowed。"""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.ignore
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = {field.alias for field in cls.__fields__.values()}
extra = values.get("model_kwargs", {})
for field_name in list(values):
if field_name not in all_required_field_names:
if field_name in extra:
raise ValueError(f"Found {field_name} supplied twice.")
extra[field_name] = values.pop(field_name)
values["model_kwargs"] = extra
return values
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
openai_api_key = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
openai_api_base = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
openai_organization = get_from_dict_or_env(
values, "openai_organization", "OPENAI_ORGANIZATION", default=""
)
try:
import openai
openai.api_key = openai_api_key
if openai_api_base:
openai.api_base = openai_api_base
if openai_organization:
openai.organization = openai_organization
except ImportError:
raise ValueError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
try:
values["client"] = openai.ChatCompletion
except AttributeError:
raise ValueError(
"`openai` has no `ChatCompletion` attribute, this is likely "
"due to an old version of the openai package. Try upgrading it "
"with `pip install --upgrade openai`."
)
warnings.warn(
"You are trying to use a chat model. This way of initializing it is "
"no longer supported. Instead, please use: "
"`from langchain.chat_models import ChatOpenAI`"
)
return values
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling OpenAI API."""
return self.model_kwargs
def _get_chat_params(
self, prompts: List[str], stop: Optional[List[str]] = None
) -> Tuple:
if len(prompts) > 1:
raise ValueError(
f"OpenAIChat currently only supports single prompt, got {prompts}"
)
messages = self.prefix_messages + [{"role": "user", "content": prompts[0]}]
params: Dict[str, Any] = {**{"model": self.model_name}, **self._default_params}
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
if params.get("max_tokens") == -1:
# for ChatGPT api, omitting max_tokens is equivalent to having no limit
del params["max_tokens"]
return messages, params
def _generate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> LLMResult:
messages, params = self._get_chat_params(prompts, stop)
if self.streaming:
response = ""
params["stream"] = True |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,331 | Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI | ### Issue you'd like to raise.
Argument `model_name` is the standard way of defining a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). However, OpenAI uses `model` in their own [API](https://platform.openai.com/docs/api-reference/completions/create). To handle this discrepancy, LangChain transforms `model_name` into `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).
The problem is that, if you ignore model_name and use model in the LLM instantiation e.g. `ChatOpenAI(model=...)`, it still works! It works because model becomes part of `model_kwargs`, which takes precedence over the default `model_name` (which would be "gpt-3.5-turbo"). This leads to an inconsistency: the `model` can be anything (e.g. "gpt-4-0314"), but `model_name` will be the default value.
This inconsistency won't cause any direct issue but can be problematic when you're trying to understand what models are actually being called and used. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.
### Suggestion:
There are three ways to solve it:
1. Raise an error or warning if model is used as an argument and suggest using model_name instead
2. Raise a warning if model is defined differently from model_name
3. Change from model_name to model to make it consistent with OpenAI's API
I think (3) is unfeasible due to the breaking change, but raising a warning seems low effort and safe enough. | https://github.com/langchain-ai/langchain/issues/4331 | https://github.com/langchain-ai/langchain/pull/4366 | 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5 | ba0057c07712e5e725c7c5e14c02d223783b183c | "2023-05-08T10:49:23Z" | python | "2023-05-08T23:37:34Z" | langchain/llms/openai.py | for stream_resp in completion_with_retry(self, messages=messages, **params):
token = stream_resp["choices"][0]["delta"].get("content", "")
response += token
if run_manager:
run_manager.on_llm_new_token(
token,
)
return LLMResult(
generations=[[Generation(text=response)]],
)
else:
full_response = completion_with_retry(self, messages=messages, **params)
llm_output = {
"token_usage": full_response["usage"],
"model_name": self.model_name,
}
return LLMResult(
generations=[
[Generation(text=full_response["choices"][0]["message"]["content"])]
],
llm_output=llm_output,
)
async def _agenerate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
) -> LLMResult:
messages, params = self._get_chat_params(prompts, stop)
if self.streaming: |