ai

This command sends information about the current debugging context to a large language model (OpenAI's GPT models, Anthropic's Claude, or a local Ollama endpoint), asks it a question supplied by the user, and displays the model's response.

Attributes:

last_question module-attribute

last_question: list[str] = []

last_answer module-attribute

last_answer: list[str] = []

last_pc module-attribute

last_pc = None

last_command module-attribute

last_command = None

dummy module-attribute

dummy = False

verbosity module-attribute

verbosity = 0

parser module-attribute

parser = ArgumentParser(
    description="Ask GPT-3 a question about the current debugging context."
)
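
The parser's individual flags are not documented here; the following is a hypothetical set that would line up with the parameters of ai() below (every flag name and default in this sketch is an assumption, not taken from the source):

import argparse

parser = argparse.ArgumentParser(
    description="Ask GPT-3 a question about the current debugging context."
)
# All of the following flags are hypothetical illustrations.
parser.add_argument("question", nargs="*", help="the question to ask")
parser.add_argument("-M", "--model", default="gpt-3.5-turbo")
parser.add_argument("-t", "--temperature", type=float, default=0.0)
parser.add_argument("-m", "--max-tokens", type=int, default=100)
parser.add_argument("-v", "--verbose", action="store_true")
parser.add_argument("-L", "--list-models", action="store_true")
parser.add_argument("-c", "--command", default=None)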

Functions:

set_dummy_mode

set_dummy_mode(d=True) -> None
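
A minimal sketch of what set_dummy_mode most likely does, assuming it simply toggles the module-level dummy flag (documented above) so the command can be exercised without real API calls:

dummy = False

def set_dummy_mode(d: bool = True) -> None:
    # Flip the module-level flag; when dummy is True, the query functions
    # can return canned responses instead of hitting a real API (assumption).
    global dummy
    dummy = d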

get_openai_api_key

get_openai_api_key()

get_anthropic_api_key

get_anthropic_api_key()

get_ollama_endpoint

get_ollama_endpoint()
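
The three lookup helpers presumably resolve credentials and the endpoint from the environment; the exact lookup order (for example, a debugger setting taking precedence over an environment variable) is an assumption. A sketch using environment variables only:

import os

def get_openai_api_key():
    # Assumption: falls back to the conventional environment variable.
    return os.environ.get("OPENAI_API_KEY", "")

def get_anthropic_api_key():
    return os.environ.get("ANTHROPIC_API_KEY", "")

def get_ollama_endpoint():
    # Ollama serves its HTTP API on localhost:11434 by default.
    return os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434")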

build_prompt

build_prompt(question, command=None)

flatten_prompt

flatten_prompt(conversation)
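
build_prompt plausibly assembles a chat-style conversation (system instructions, the debugging context, then the user's question), and flatten_prompt collapses that message list into a single string for completion-style models. A sketch under those assumptions, leaning on the two prompt-body builders documented next:

def build_prompt(question, command=None):
    # Assumption: the conversation is a list of {"role", "content"} dicts,
    # the shape chat-style APIs expect.
    if command is not None:
        body = build_command_prompt_body(command)  # sketched below
    else:
        body = build_context_prompt_body()  # sketched below
    return [
        {"role": "system", "content": "You are a helpful debugging assistant."},
        {"role": "user", "content": body},
        {"role": "user", "content": question},
    ]

def flatten_prompt(conversation):
    # Collapse the message list into one string for completion-style models.
    return "\n".join(f"{m['role']}: {m['content']}" for m in conversation) + "\nassistant:"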

build_context_prompt_body

build_context_prompt_body()

build_command_prompt_body

build_command_prompt_body(command)
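
Inside GDB, both prompt bodies can be captured with gdb.execute(..., to_string=True); exactly which commands feed the context (registers, disassembly, backtrace) is an assumption. This sketch runs only under GDB's embedded Python:

import gdb  # available only inside GDB's embedded Python

def build_context_prompt_body():
    # Assumption: the context is a text dump of registers, disassembly
    # around the program counter, and the backtrace.
    parts = []
    for cmd in ("info registers", "x/16i $pc", "backtrace"):
        parts.append(f"$ {cmd}\n" + gdb.execute(cmd, to_string=True))
    return "\n".join(parts)

def build_command_prompt_body(command):
    # Assumption: runs the user-supplied command and captures its output.
    return f"$ {command}\n" + gdb.execute(command, to_string=True)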

query_openai_chat

query_openai_chat(
    prompt, model="gpt-3.5-turbo", max_tokens=100, temperature=0.0
)
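
query_openai_chat presumably POSTs the conversation to OpenAI's chat completions endpoint. A sketch using requests (the use of requests rather than the openai package is an assumption), reusing get_openai_api_key from above:

import requests

def query_openai_chat(prompt, model="gpt-3.5-turbo", max_tokens=100, temperature=0.0):
    # prompt is a chat-style message list, matching build_prompt above.
    r = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {get_openai_api_key()}"},
        json={
            "model": model,
            "messages": prompt,
            "max_tokens": max_tokens,
            "temperature": temperature,
        },
        timeout=60,
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]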

query_openai_completions

query_openai_completions(
    prompt, model="text-davinci-003", max_tokens=100, temperature=0.0
)
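
The completions variant would mirror the chat sketch, but take a flat string prompt and target the legacy endpoint:

import requests

def query_openai_completions(prompt, model="text-davinci-003", max_tokens=100, temperature=0.0):
    # prompt is a flat string here; see flatten_prompt above.
    r = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {get_openai_api_key()}"},
        json={
            "model": model,
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": temperature,
        },
        timeout=60,
    )
    r.raise_for_status()
    return r.json()["choices"][0]["text"]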

query

query(prompt, model="text-davinci-003", max_tokens=100, temperature=0.0)
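
query is most plausibly a dispatcher that picks the chat or completions path from the model name; the exact heuristic is an assumption:

def query(prompt, model="text-davinci-003", max_tokens=100, temperature=0.0):
    # Assumption: chat-capable models route to the chat endpoint; everything
    # else goes to the legacy completions endpoint with a flattened prompt.
    if model.startswith("gpt-"):
        return query_openai_chat(prompt, model, max_tokens, temperature)
    return query_openai_completions(flatten_prompt(prompt), model, max_tokens, temperature)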

query_anthropic

query_anthropic(prompt, model="claude-v1", max_tokens=100, temperature=0.0)
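
The claude-v1 default suggests Anthropic's legacy text-completion API, which expects the Human/Assistant turn format; a sketch under that assumption:

import requests

def query_anthropic(prompt, model="claude-v1", max_tokens=100, temperature=0.0):
    r = requests.post(
        "https://api.anthropic.com/v1/complete",
        headers={"x-api-key": get_anthropic_api_key()},
        json={
            "model": model,
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
            "temperature": temperature,
        },
        timeout=60,
    )
    r.raise_for_status()
    return r.json()["completion"]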

query_ollama

query_ollama(prompt, model="mistral", max_tokens=100, temperature=0.0)
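
A sketch against Ollama's generate endpoint; with "stream": False the whole answer arrives as one JSON object, and num_predict is Ollama's token cap (that this is how the module drives Ollama is an assumption):

import requests

def query_ollama(prompt, model="mistral", max_tokens=100, temperature=0.0):
    r = requests.post(
        f"{get_ollama_endpoint()}/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": temperature, "num_predict": max_tokens},
        },
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"]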

get_openai_models

get_openai_models()
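
Listing models is a single GET against OpenAI's models endpoint; a sketch:

import requests

def get_openai_models():
    # Return the model IDs the configured key can access.
    r = requests.get(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {get_openai_api_key()}"},
        timeout=60,
    )
    r.raise_for_status()
    return sorted(m["id"] for m in r.json()["data"])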

ai

ai(
    question,
    model,
    temperature,
    max_tokens,
    verbose,
    list_models=False,
    command=None,
) -> None
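
Putting the pieces together, ai() plausibly routes the question through the helpers above; the backend selection by model name is an assumption, and verbose handling is omitted from this sketch:

def ai(question, model, temperature, max_tokens, verbose,
       list_models=False, command=None) -> None:
    if list_models:
        print("\n".join(get_openai_models()))
        return
    conversation = build_prompt(question, command=command)
    # Assumption: the backend is chosen from the model name.
    if model.startswith("claude"):
        answer = query_anthropic(flatten_prompt(conversation), model, max_tokens, temperature)
    elif model.startswith(("gpt-", "text-")):
        answer = query(conversation, model, max_tokens, temperature)
    else:
        answer = query_ollama(flatten_prompt(conversation), model, max_tokens, temperature)
    print(answer)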