What is Cody?

Cody is an AI coding assistant that lives in your editor and can find, explain, and write code. Cody uses a combination of AI (specifically Large Language Models, or LLMs), Sourcegraph search, and Sourcegraph code intelligence to provide answers that eliminate toil and keep human programmers in flow. You can think of Cody as your programmer buddy who has read through all the code on GitHub, all the questions on StackOverflow, and all your organization’s private code, and is always there to answer questions you might have or suggest ways of doing something based on prior knowledge.

Is Cody for Enterprise customers? Or for individual devs?

Both. There are two ways to use Cody: individual developers can use it in their own editor, and Enterprise customers can use it connected to their organization’s Sourcegraph instance.

How does Cody work?

To provide responses to requests, Cody does the following (a rough code sketch follows the list):

  1. A user asks Cody a question (or to write some code).
  2. Cody fetches relevant code snippets.
    1. Unlike Copilot, Cody knows about entire codebases and fetches snippets directly relevant to your request.
    2. Sourcegraph uses a combination of code search, the code graph (SCIP), intelligent ranking, and a vector database to return snippets that are relevant to the user’s request.
  3. Sourcegraph passes a selection of these results, along with the original question, to a Large Language Model like Claude or OpenAI’s ChatGPT.
  4. The Large Language Model uses the contextual info from Sourcegraph to generate a factual answer and sends it to Cody.
  5. Cody then validates the output of the Large Language Model and sends the answer back to the user.
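The steps above amount to a retrieval-augmented generation loop: fetch relevant code, build a prompt around it, ask the model, check the result. The TypeScript sketch below illustrates that shape only; the interfaces, function names, and prompt wording are assumptions for illustration, not Cody’s actual APIs or prompts.

```typescript
// Illustrative sketch of the flow described above. None of these names
// are real Sourcegraph or Cody APIs; they only show the pipeline's shape.

interface CodeSnippet {
  repo: string;    // repository the snippet came from
  path: string;    // file path within the repository
  content: string; // the snippet text itself
  score: number;   // relevance score from search/ranking
}

// Step 2: fetch context. Assumed to combine code search, the code graph,
// and vector similarity behind a single ranked result list.
type ContextFetcher = (question: string, limit: number) => Promise<CodeSnippet[]>;

// Steps 3-4: call an LLM (e.g. Claude) with the question plus context.
type LlmClient = (prompt: string) => Promise<string>;

async function answerWithContext(
  question: string,
  fetchContext: ContextFetcher,
  llm: LlmClient,
): Promise<string> {
  // Step 2: gather the most relevant snippets for this question.
  const snippets = await fetchContext(question, 10);

  // Step 3: assemble a prompt that grounds the model in real code.
  const contextBlock = snippets
    .map((s) => `File: ${s.repo}/${s.path}\n${s.content}`)
    .join("\n---\n");
  const prompt =
    "You are a coding assistant. Answer using only the context below.\n\n" +
    `Context:\n${contextBlock}\n\n` +
    `Question: ${question}\nAnswer:`;

  // Step 4: the LLM generates an answer grounded in the supplied context.
  const answer = await llm(prompt);

  // Step 5: lightweight validation before returning to the user
  // (a real check would be more involved than rejecting empty output).
  if (answer.trim().length === 0) {
    throw new Error("empty response from model");
  }
  return answer;
}
```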

How is Cody different from ChatGPT?

Cody uses a ChatGPT-like model as a component in its architecture (today we use Claude, but we could alternatively use ChatGPT or a similar Large Language Model). ChatGPT lacks the ability to search for contextual code snippets and docs, so its knowledge is limited to the open source code it was trained on. It does not know about recent changes to code or about your private codebase. Rather than telling you when it doesn’t know, ChatGPT will just confidently make something up that sounds correct but is false. The contextual snippets that Cody fetches from Sourcegraph are crucial to enabling Cody to generate factually accurate responses.
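The practical difference comes down to what ends up in the prompt. The toy contrast below assumes the prompt is plain text; the wording and the private helper name `renewAuthToken` are made up for illustration.

```typescript
// Illustrative only: a plain ChatGPT-style prompt vs. a context-augmented
// prompt like the one Cody assembles. `renewAuthToken` is a hypothetical
// private helper the model could not have seen during training.

const question = "What does our internal renewAuthToken helper return?";

// Without retrieval, the model only has its training data to draw on, so a
// private helper like this is unknown to it and invites a made-up answer.
const plainPrompt = `Question: ${question}\nAnswer:`;

// With retrieval, the snippet fetched from the codebase is placed in the
// prompt, so the model can answer from evidence instead of guessing.
const retrievedSnippet =
  "function renewAuthToken(): Promise<{ token: string; expiresAt: Date }> { /* ... */ }";
const groundedPrompt =
  `Context:\n${retrievedSnippet}\n\n` +
  `Question: ${question}\n` +
  "If the context does not contain the answer, say you don't know.\nAnswer:";
```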

How is Cody different from GitHub Copilot?