Remove obsoleted extension source

main
Jeff Moe 2024-05-07 10:49:59 -06:00
parent 9e9c01a66b
commit 21efa74b63
566 changed files with 0 additions and 106492 deletions

src/.gitignore vendored

@@ -1,149 +0,0 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
**/node_modules
**/out
notes.txt
cached_embeddings.pkl
.ruff_cache
codeql
**/.continue
.DS_Store
.continue
.test
.tiktoken_cache
# IntelliJ Plugin
**/**/.gradle
**/**/.idea
**/**/.qodana
**/**/build
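The pattern list above mixes plain directory rules (`__pycache__/`), character-class globs (`*.py[cod]`), and recursive globs (`**/node_modules`). A minimal sketch of how to check which rule matches a given path, using `git check-ignore` in a throwaway repository (the path names below are hypothetical examples, not files from this project):

```shell
# Create a scratch repo and reproduce a few of the patterns above.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf '__pycache__/\n*.py[cod]\n**/node_modules\n' > .gitignore

# -v shows which .gitignore line matched; the path need not exist on disk.
git check-ignore -v app/__pycache__/mod.pyc   # matched by the __pycache__/ directory rule
git check-ignore -v web/node_modules/lib.js   # matched by **/node_modules
git check-ignore -q app/main.py && echo ignored || echo not-ignored
```

`check-ignore -q` exits 0 only when the path is ignored, which makes it convenient for scripting against a pattern set like this one.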


@@ -1,235 +0,0 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software.
A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public.
The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version.
An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based on the Program.
To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.
A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements.
You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see <http://www.gnu.org/licenses/>.


@ -1,33 +0,0 @@
# Makefile
#
# Copyright (C) 2023, Jeff Moe
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
all:
	$(MAKE) setup
	$(MAKE) build

setup:
	virtualenv env ; . env/bin/activate ; pip install -r requirements.txt ; pip install pyinstaller

build:
	./build.sh

clean:
	rm -rf build
	rm -rf env
	rm -rf dist
	rm -rf server/.venv
	rm -rf .tiktoken_cache


@ -1,83 +0,0 @@
> 🎁 **New! [Try out the new JetBrains extension (Alpha)](https://plugins.jetbrains.com/plugin/22707-continue-extension)**
> Interested in a 1-on-1, 15-minute introduction to Continue? Fill out [this form](https://forms.gle/H6U6rGDX55oWSWjC8) and we'll get in touch!
![Continue logo](media/c_d.png)
<h1 align="center">Continue</h1>
<div align="center">
**[Continue](https://continue.dev/docs) is the open-source autopilot for software development—an IDE extension that brings the power of ChatGPT to [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension)**
</div>
<div align="center">
<a target="_blank" href="https://opensource.org/licenses/Apache-2.0" style="background:none">
<img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" style="height: 36px;" />
</a>
<a target="_blank" href="https://continue.dev/docs" style="background:none">
<img src="https://img.shields.io/badge/continue_docs-%23BE1B55" style="height: 36px;" />
</a>
<a target="_blank" href="https://discord.gg/vapESyrFmJ" style="background:none">
<img src="https://img.shields.io/badge/discord-join-continue.svg?labelColor=191937&color=6F6FF7&logo=discord" style="height: 36px;" />
</a>
<p></p>
![Editing With Continue](media/readme.gif)
</div>
## Task, not tab, auto-complete
### Answer coding questions
Highlight sections of code and ask Continue for another perspective
- “what does this forRoot() static function do in nestjs?”
- “why is the first left join in this query necessary here?”
- “how do I run a performance benchmark on this rust binary?”
### Edit in natural language
Highlight a section of code and instruct Continue to refactor it
- “/edit rewrite this to return a flattened list from a 3x3 matrix”
- “/edit refactor these into an angular flex layout on one line”
- “/edit define a type here for a list of lists of dictionaries”
### Generate files from scratch
Open a blank file and let Continue start new Python scripts, React components, etc.
- “/edit get me started with a basic supabase edge function”
- “/edit implement a c++ shortest path algo in a concise way”
- “/edit create a docker compose file with php and mysql server”
### Understand errors and exceptions
Press `cmd+shift+r` (macOS) / `ctrl+shift+r` (Windows) when you come across an error or exception in your terminal. This sends the stack trace into Continue and asks it to explain the issue to you.
## Getting Started
### Download for [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension)
By default, Continue uses `GPT-4` and `GPT-3.5-turbo` via the OpenAI API. You can adjust the config to use different Large Language Models (LLMs) like [Code Llama, Claude 2, WizardCoder, PaLM 2, and more](https://github.com/continuedev/what-llm-to-use). Read more [here](https://continue.dev/docs/customization/models).
### [Run the server manually](https://continue.dev/docs/walkthroughs/manually-run-continue)
You might want to run Continue manually if (a) a firewall, VPN, or other issue is stopping Continue from automatically downloading the server binary, (b) you are on an OS where the binary fails to run (e.g. RHEL8), (c) you are using an air-gapped computer, (d) you want to self-host Continue, or (e) you want to run from source while developing / modifying Continue's code.
### [Run in "headless mode"](https://continue.dev/docs/walkthroughs/headless-mode)
"Headless mode" allows Continue to run in the background, without needing to be connected to the IDE or GUI. This is useful for performing refactors or other long-running tasks asynchronously. Headless mode can also be run in CI/CD, for example, to perform a thorough review for errors.
## Contributing
Check out the [contribution ideas board](https://github.com/orgs/continuedev/projects/2), read the [contributing guide](https://github.com/continuedev/continue/blob/main/CONTRIBUTING.md), and join [#contribute on Discord](https://discord.gg/vapESyrFmJ)
## License
[Apache 2.0 © 2023 Continue Dev, Inc.](./LICENSE)


@ -1,209 +0,0 @@
NOTICES
This repository incorporates material as listed below or described in the code.
---------------------------------------------------------
Copyright 2023 Continue Dev, Inc.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 Continue Dev, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -1,44 +0,0 @@
#!/bin/sh
# 1. Remove unwanted stuff
rm -rf build
rm -rf env
rm -rf dist
rm -rf server/.venv
rm -rf .tiktoken_cache
# 2. Create a new virtual environment and activate it
python3 -m venv env
. env/bin/activate
# 3. Install the required packages
pip install -r server/requirements.txt || exit 1
pip install pyinstaller || exit 1
# 4. Detect M1 architecture or allow manual override
USE_ARCH="intel"
if [ "$1" = "m1" ]; then
echo "Building for M1 architecture"
USE_ARCH="m1"
elif [ "$1" = "regular" ]; then
echo "Building for regular architecture"
USE_ARCH="intel"
else
ARCH=$(uname -m)
if [ "$ARCH" = "arm64" ]; then
echo "$ARCH architecture detected, using M1 spec file"
USE_ARCH="m1"
else
echo "$ARCH architecture detected, using regular spec file"
USE_ARCH="intel"
fi
fi
# 4.5. Make .tiktoken_cache directory, used to package with tiktoken vocab file
mkdir .tiktoken_cache
# 5. Call PyInstaller from within the virtual environment
env/bin/pyinstaller continue_server.spec -- --arch $USE_ARCH
# 6. Deactivate the virtual environment
deactivate


@ -1,18 +0,0 @@
import os
import sys
from server.main import main
# __import__('pysqlite3')
# import sys
# sys.modules['sqlite3'] = sys.modules.pop('pysqlite3')
if getattr(sys, "frozen", False) and hasattr(sys, "_MEIPASS"):
    ca_bundle_path = os.path.join(sys._MEIPASS, "ca_bundle", "cacert.pem")
    print("Certificates at: ", ca_bundle_path)
    os.environ["SSL_CERT_FILE"] = ca_bundle_path
    os.environ["REQUESTS_CA_BUNDLE"] = ca_bundle_path

if __name__ == "__main__":
    print("Running Continue server version 0.0.350")
    main()


@ -1,126 +0,0 @@
# -*- mode: python ; coding: utf-8 -*-
import certifi
import os
import sys
from PyInstaller.utils.hooks import copy_metadata
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--arch", type=str)
parser.add_argument("--dir", action="store_true")
options = parser.parse_args()
block_cipher = None
import subprocess
def find_package_location(package_name):
    try:
        # Run the 'pip show' command and capture its output
        result = subprocess.run(
            ['pip', 'show', package_name],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
            text=True, check=True,
        )
        output = result.stdout
        # Split the output into lines and find the 'Location' field
        for line in output.splitlines():
            if line.startswith('Location:'):
                # Extract the path after the 'Location:' prefix
                location = line.split(':', 1)[1].strip()
                return location
    except subprocess.CalledProcessError as e:
        print(f"Error: {e.stderr}")
    return None
chroma_path = find_package_location('chromadb')
chroma_toc = list(map(lambda x: (x[1], os.path.dirname(x[0])), Tree(f'{chroma_path}/chromadb/migrations', prefix="chromadb/migrations")))
tsl_path = find_package_location('tree_sitter_languages')
tsl_filename = "languages.dll" if sys.platform == "win32" else "languages.so"
a = Analysis(
['continue_server.py'],
pathex=[],
binaries=[(os.path.join(tsl_path, 'tree_sitter_languages', tsl_filename), "tree_sitter_languages")],
datas=[
('server/continuedev', 'continuedev'),
(certifi.where(), 'ca_bundle'),
('.tiktoken_cache', 'tiktoken_cache'),
] + copy_metadata('replicate') + chroma_toc,
hiddenimports=[
'anthropic', 'github', 'ripgrepy', 'bs4', 'redbaron', 'tree_sitter', 'tree_sitter_languages',
'chromadb', 'onnxruntime',
'chromadb.telemetry.posthog',
'chromadb.api.segment', 'chromadb.db.impl',
'chromadb.db.impl.sqlite', 'chromadb.migrations',
'chromadb.migrations.embeddings_queue', 'chromadb.migrations.sysdb',
'chromadb.migrations.metadb', 'chromadb.segment.impl',
'chromadb.segment.impl.manager', 'chromadb.segment.impl.manager.local',
'chromadb.segment.impl.metadata', 'chromadb.segment.impl.metadata.sqlite',
# 'pysqlite3'
],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
target_arch = "arm64" if options.arch == "m1" else None
print("Using target arch", target_arch)
if options.dir:
print("Using directory")
exe = EXE(
pyz,
a.scripts,
exclude_binaries=True,
name='continue_server',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=True,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=target_arch,
codesign_identity=None,
entitlements_file=None,
)
coll = COLLECT(
exe,
a.binaries,
a.datas,
name='continue_server',
)
else:
print("Using one file")
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='continue_server',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=True,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)

src/docs/.gitignore vendored

@ -1,20 +0,0 @@
# Dependencies
/node_modules
# Production
/build
# Generated files
.docusaurus
.cache-loader
# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*


@ -1,7 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html),
and is generated by [Changie](https://github.com/miniscruff/changie).


@ -1,3 +0,0 @@
# Continue Docs
Markdown content exists in the `docs/` folder, nested as it will be shown in the sidebar. `docusaurus.config.js` defines important footer, sidebar, and title content for the site.


@ -1,3 +0,0 @@
module.exports = {
presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};


@ -1,277 +0,0 @@
---
title: Config File Migration
description: Migrating from config.py to config.json
keywords: [json, config, configuration, migration]
---
# Migration to `config.json`
On November 20, 2023, we migrated to JSON as the primary config file format. If you previously used Continue, we will have attempted to automatically translate your existing `config.py` into a `config.json` file. If this fails, we fall back to a default `config.json`. Your previous `config.py` is still kept, but moved to `config.py.old` for reference. Below is a list of the changes that were made, in case you need to migrate your config manually, as well as examples of proper `config.json` files.
The JSON format provides stronger guardrails, making it easier to write a valid config, while still allowing IntelliSense in VS Code.
If you need any help migrating, please reach out to us on Discord.
## Configuration as Code
For configuration that requires code, we now provide a simpler interface that works alongside config.json. In the same folder, `~/.continue`, create a file named `config.py` (the same name as before) and add a function called `modify_config`. This function should take a [`ContinueConfig`](https://github.com/continuedev/continue/blob/main/server/continuedev/core/config.py) object as its only argument, and return a `ContinueConfig` object. This object is essentially the same as the one that was previously defined in `config.py`. This allows you to modify the initial configuration object defined in your `config.json`. Here's an example that cuts the temperature in half:
```python
from continuedev.core.config import ContinueConfig
def modify_config(config: ContinueConfig) -> ContinueConfig:
config.completion_options.temperature /= 2
return config
```
To summarize, these are the steps taken to load your configuration:
1. Load `~/.continue/config.json`
2. Convert this into a `ContinueConfig` object
3. If `~/.continue/config.py` exists and has defined `modify_config` correctly, call `modify_config` with the `ContinueConfig` object to generate the final configuration
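These three steps can be sketched in plain Python. This is an illustration, not the loader's actual code: the file paths match the ones above, but the dynamic-import details and the use of a plain dict in place of `ContinueConfig` are simplifications.

```python
import importlib.util
import json
import os


def load_continue_config(continue_dir=os.path.expanduser("~/.continue")):
    """Illustrative sketch of the three-step configuration load."""
    # 1. Load ~/.continue/config.json
    with open(os.path.join(continue_dir, "config.json")) as f:
        config = json.load(f)  # 2. stands in for building a ContinueConfig

    # 3. If config.py exists and defines modify_config, apply it
    py_path = os.path.join(continue_dir, "config.py")
    if os.path.exists(py_path):
        spec = importlib.util.spec_from_file_location("user_config", py_path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        if hasattr(module, "modify_config"):
            config = module.modify_config(config)
    return config
```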
## List of Changes
### `completion_options`
The properties `top_p`, `top_k`, `temperature`, `presence_penalty`, and `frequency_penalty` have been moved into a single object called `completion_options`. It can be specified at the top level of the config or within a `models` object.
### `request_options`
The properties `timeout`, `verify_ssl`, `ca_bundle_path`, `proxy`, and `headers` have been moved into a single object called `request_options`, which can be specified for each `models` object.
### The `model` property
Instead of writing something like `Ollama(model="phind-codellama:34b", ...)`, where the `model` property was different depending on the provider and had to be exactly correct, we now offer a default set of models, including the following:
```python
# OpenAI
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-4",
"gpt-3.5-turbo-0613",
"gpt-4-32k",
"gpt-4-1106-preview",
# Open-Source
"mistral-7b",
"llama2-7b",
"llama2-13b",
"codellama-7b",
"codellama-13b",
"codellama-34b",
"phind-codellama-34b",
"wizardcoder-7b",
"wizardcoder-13b",
"wizardcoder-34b",
"zephyr-7b",
"codeup-13b",
"deepseek-1b",
"deepseek-7b",
"deepseek-33b",
# Anthropic
"claude-2",
# Google PaLM
"chat-bison-001",
```
If you want to use a model not listed here, you can still do that by specifying whichever value of `model` you need. But if there's something you think we should add as a default, let us know!
### Prompt template auto-detection
Based on the `model` property, we now attempt to [autodetect](https://github.com/continuedev/continue/blob/108e00c7db9cad110c5df53bdd0436b286b92466/server/continuedev/core/config_utils/shared.py#L38) the prompt template. If you want to be explicit, you can select one of our prompt template types (`"llama2", "alpaca", "zephyr", "phind", "anthropic", "chatml", "deepseek"`) or write a custom prompt template in `config.py`.
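A sketch of what substring-based autodetection might look like, using the template types listed above. The mapping here is an assumption for illustration only; the authoritative table is in the `shared.py` file linked above.

```python
from typing import Optional

# Illustrative (substring, template) hints -- not the real table. More
# specific hints come first so "phind-codellama" matches "phind".
TEMPLATE_HINTS = [
    ("phind", "phind"),
    ("codellama", "llama2"),
    ("llama2", "llama2"),
    ("wizardcoder", "alpaca"),
    ("zephyr", "zephyr"),
    ("deepseek", "deepseek"),
    ("claude", "anthropic"),
]


def autodetect_template(model: str) -> Optional[str]:
    """Return the first template whose hint appears in the model name."""
    for hint, template in TEMPLATE_HINTS:
        if hint in model:
            return template
    return None  # no hint matched; an explicit template would be needed
```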
### `PromptTemplate`
If you were previously using the `PromptTemplate` class in your `config.py` to write a custom template, we have moved it from `continuedev.libs.llm.base` to `continuedev.models.llm`.
## Examples of `config.json`
After the "Full example" these examples will only show the relevant portion of the config file.
### Full example, with OpenAI Free Trial
```json
{
"models": [
{
"title": "GPT-4",
"provider": "openai-free-trial",
"model": "gpt-4"
},
{
"title": "GPT-3.5-Turbo",
"provider": "openai-free-trial",
"model": "gpt-3.5-turbo"
}
],
"system_message": "Always be kind",
"completion_options": {
"temperature": 0.5
},
"model_roles": {
"default": "GPT-4",
"summarize": "GPT-3.5-Turbo"
},
"slash_commands": [
{
"name": "edit",
"description": "Edit highlighted code",
"step": "EditHighlightedCodeStep"
},
{
"name": "config",
"description": "Customize Continue",
"step": "OpenConfigStep"
},
{
"name": "comment",
"description": "Write comments for the highlighted code",
"step": "CommentCodeStep"
},
{
"name": "share",
"description": "Download and share this session",
"step": "ShareSessionStep"
},
{
"name": "cmd",
"description": "Generate a shell command",
"step": "GenerateShellCommandStep"
}
],
"custom_commands": [
{
"name": "test",
"prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
"description": "Write unit tests for highlighted code"
}
],
"context_providers": [{ "name": "terminal" }, { "name": "diff" }]
}
```
### Ollama with CodeLlama 13B
```json
{
"models": [
{
"title": "Ollama",
"provider": "ollama",
"model": "codellama-13b"
}
]
}
```
### Claude 2
```json
{
"models": [
{
"title": "Claude-2",
"provider": "anthropic",
"model": "claude-2",
"api_key": "sk-ant-api03-REST_OF_API_KEY",
"context_length": 100000
}
]
}
```
### LM Studio with Phind Codellama 34B
```json
{
"models": [
{
"title": "LM Studio",
"provider": "lmstudio",
"model": "phind-codellama-34b"
}
]
}
```
### OpenAI-compatible API
This is an example of serving a model using an OpenAI-compatible API on http://localhost:8000.
```json
{
"models": [
{
"title": "OpenAI-compatible API",
"provider": "openai",
"model": "codellama-13b",
"api_base": "http://localhost:8000"
}
]
}
```
### Azure OpenAI
```json
{
"models": [
{
"title": "Azure OpenAI",
"provider": "openai",
"model": "gpt-3.5-turbo",
"api_key": "my-api-key",
"api_base": "https://my-azure-openai-instance.openai.azure.com/",
"engine": "my-azure-openai-deployment",
"api_version": "2023-07-01-preview",
"api_type": "azure"
}
]
}
```
### TogetherAI
```json
{
"models": [
{
"title": "Phind CodeLlama",
"provider": "together",
"model": "phind-codellama-34b",
"api_key": "<your-api-key>"
}
]
}
```
### Temperature, top_p, etc...
The `completion_options` for each model override the top-level `completion_options`. For example, the "GPT-4" model here will have a temperature of 0.8, while the "GPT-3.5-Turbo" model will have a temperature of 0.5.
```json
{
"models": [
{
"title": "GPT-4",
"provider": "openai-free-trial",
"model": "gpt-4",
"completion_options": {
"top_p": 0.9,
"top_k": 40,
"temperature": 0.8
}
},
{
"title": "GPT-3.5-Turbo",
"provider": "openai-free-trial",
"model": "gpt-3.5-turbo"
}
],
"completion_options": {
"temperature": 0.5,
"presence_penalty": 0.5,
"frequency_penalty": 0.5
}
}
```
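These per-model overrides behave like a key-by-key dictionary merge over the top-level defaults. Here is a minimal sketch of the rule (the `effective_options` helper is hypothetical, not part of Continue):

```python
# Top-level completion_options act as defaults...
top_level = {"temperature": 0.5, "presence_penalty": 0.5, "frequency_penalty": 0.5}
# ...and a model's own completion_options win key by key
gpt4_options = {"top_p": 0.9, "top_k": 40, "temperature": 0.8}

def effective_options(model_options: dict) -> dict:
    # Later entries in the merge take precedence
    return {**top_level, **model_options}

# "GPT-4" keeps its own temperature but inherits the penalties
assert effective_options(gpt4_options)["temperature"] == 0.8
assert effective_options(gpt4_options)["presence_penalty"] == 0.5
# "GPT-3.5-Turbo" sets nothing, so it gets the top-level temperature of 0.5
assert effective_options({})["temperature"] == 0.5
```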

# Code Configuration
To allow added flexibility and eventually support an entire plugin ecosystem, Continue can be configured programmatically in a Python file, `~/.continue/config.py`.
Whenever Continue loads, it carries out the following steps:
1. Load `~/.continue/config.json`
2. Convert this into a `ContinueConfig` object
3. If `~/.continue/config.py` exists and has defined `modify_config` correctly, call `modify_config` with the `ContinueConfig` object to generate the final configuration
Defining a `modify_config` function allows you to make any final modifications to your initial `config.json`. Here's an example that cuts the temperature in half:
```python title="~/.continue/config.py"
from continuedev.core.config import ContinueConfig
def modify_config(config: ContinueConfig) -> ContinueConfig:
config.completion_options.temperature /= 2
return config
```
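Putting the three steps together, the load sequence can be sketched end to end like this (the `ContinueConfig` and `CompletionOptions` classes below are simplified stand-ins for Continue's real types):

```python
import json
from dataclasses import dataclass, field

# Simplified stand-ins for Continue's real config types, for illustration only
@dataclass
class CompletionOptions:
    temperature: float = 0.5

@dataclass
class ContinueConfig:
    completion_options: CompletionOptions = field(default_factory=CompletionOptions)

def load_config(raw_json: str, modify_config=None) -> ContinueConfig:
    data = json.loads(raw_json)                    # 1. load config.json
    config = ContinueConfig(                       # 2. convert it into a ContinueConfig
        completion_options=CompletionOptions(**data.get("completion_options", {}))
    )
    if modify_config is not None:                  # 3. apply config.py's modify_config
        config = modify_config(config)
    return config

def halve_temperature(config: ContinueConfig) -> ContinueConfig:
    config.completion_options.temperature /= 2
    return config

cfg = load_config('{"completion_options": {"temperature": 0.8}}', halve_temperature)
assert cfg.completion_options.temperature == 0.4
```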

---
title: Context Providers
description: Type '@' to select content to send to the LLM as context
keywords: [context, "@", provider, LLM]
---
# Context Providers
Context Providers allow you to type '@' and see a dropdown of content that can all be fed to the LLM as context. Every context provider is a plugin, which means if you want to reference some source of information that you don't see here, you can request (or build!) a new context provider.
As an example, say you are working on solving a new GitHub Issue. You type '@issue' and select the one you are working on. Continue can now see the issue title and contents. You also know that the issue is related to the files 'readme.md' and 'helloNested.py', so you type '@readme' and '@hello' to find and select them. Now these 3 "Context Items" are displayed above the input.
![Context Items](/img/context-provider-example.png)
When you enter your next input, Continue will see the full contents of each of these items, and can use them to better answer your questions throughout the conversation.
## Built-in Context Providers
To use any of the built-in context providers, open `~/.continue/config.json` (you can also open it with the '/config' slash command) and add it to the `context_providers` list.
### GitHub
Type '@issue' to reference the title and contents of a GitHub issue.
```json
{
"name": "github",
"params": {
// Change to whichever repo you want to use
"repo_name": "continuedev/continue",
"auth_token": "<my_github_auth_token>"
}
}
```
### Codebase Search
Type '@search' to reference the results of codebase search, just like the results you would get from VS Code search.
```json
{ "name": "search" }
```
### URLs
Type '@url' to reference the contents of a URL. You can either reference preset URLs, or reference one dynamically by typing '@url https://example.com'. The text contents of the page will be fetched and used as context.
```json
{
"name": "url",
"params": { "preset_urls": ["https://continue.dev/docs/customization"] }
}
```
### Git Diff
Type '@diff' to reference all of the changes you've made to your current branch. This is useful if you want to summarize what you've done or ask for a general review of your work before committing.
```json
{ "name": "diff" }
```
### File Tree
Type '@tree' to reference the contents of your current workspace. The LLM will be able to see the nested directory structure of your project.
```json
{ "name": "tree" }
```
### Google
Type '@google' to reference the results of a Google search. For example, type "@google python tutorial" if you want to search and discuss ways of learning Python.
```json
{
"name": "google",
"params": { "serper_api_key": "<your serper.dev api key>" }
}
```
Note: You can get an API key for free at [serper.dev](https://serper.dev).
### Terminal
Type '@terminal' to reference the contents of your IDE's terminal.
```json
{ "name": "terminal" }
```
### Requesting Context Providers
Not seeing what you want? Create an issue [here](https://github.com/continuedev/continue/issues/new?assignees=TyDunn&labels=enhancement&projects=&template=feature-request-%F0%9F%92%AA.md&title=) to request a new ContextProvider.
## Building Your Own Context Provider
### Introductory Example
As an example, here is the `GitHubIssuesContextProvider`, which lets you search all open GitHub Issues in a repo:
```python
class GitHubIssuesContextProvider(ContextProvider):
"""
The GitHubIssuesContextProvider is a ContextProvider that allows you to search GitHub issues in a repo.
"""
title = "issues"
repo_name: str
auth_token: str
async def provide_context_items(self) -> List[ContextItem]:
auth = Auth.Token(self.auth_token)
gh = Github(auth=auth)
repo = gh.get_repo(self.repo_name)
issues = repo.get_issues().get_page(0)
return [ContextItem(
content=issue.body,
description=ContextItemDescription(
name=f"Issue #{issue.number}",
description=issue.title,
id=ContextItemId(
provider_title=self.title,
item_id=issue.id
)
)
) for issue in issues]
```
It can then be set in the `ContinueConfig` like so:
```python title="~/.continue/config.py"
def modify_config(config: ContinueConfig) -> ContinueConfig:
config.context_providers.append(GitHubIssuesContextProvider(
repo_name="my-github-username-or-org/my-github-repo",
auth_token="my-github-auth-token"
))
return config
```
This example is a situation where you request all of the data (issues in this case) beforehand, and store them in the ContextProvider.
### Dynamic Context Providers
There are other scenarios where you might want to just get information on demand, for example by typing '@url https://continue.dev/docs/context-providers' and having the ContextProvider fetch the contents of that URL dynamically. For this case, you can implement the `DynamicContextProvider` class like this:
```python
from continuedev.plugins.context_providers.dynamic import DynamicContextProvider
class ExampleDynamicProvider(DynamicContextProvider):
title = "example"
name = "Example"
description = "Example description"
async def get_content(self, query: str) -> str:
return f"Example content for '{query}'"
async def setup(self):
print("Example setup")
```
The `setup` method optionally allows you to do any setup when Continue is first loaded. The `get_content` method takes the query (which would be 'https://continue.dev/docs/context-providers' in the example above) and returns the content that will be used as context.
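Here is a self-contained sketch of that contract; the `DynamicContextProvider` base class below is an inline stand-in so the example runs on its own, rather than the real import shown above:

```python
import asyncio

# Inline stand-in for continuedev.plugins.context_providers.dynamic.DynamicContextProvider
class DynamicContextProvider:
    title: str = ""
    async def setup(self) -> None:
        pass
    async def get_content(self, query: str) -> str:
        raise NotImplementedError

class ExampleDynamicProvider(DynamicContextProvider):
    title = "example"
    async def setup(self) -> None:
        # Runs once when Continue first loads
        print("Example setup")
    async def get_content(self, query: str) -> str:
        # Called on demand with whatever follows '@example'
        return f"Example content for '{query}'"

async def main() -> str:
    provider = ExampleDynamicProvider()
    await provider.setup()
    return await provider.get_content("https://continue.dev/docs/context-providers")

content = asyncio.run(main())
```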

---
title: Models
description: Swap out different LLM providers
keywords: [openai, anthropic, PaLM, ollama, ggml]
---
# Models
Continue makes it easy to swap out different LLM providers. You can either click the "+" button next to the model dropdown to configure in the UI or manually add them to your `config.json`. Once you've done this, you will be able to switch between them with the model selection dropdown.
**Commercial Models**
- [OpenAIFreeTrial](../reference/Models/openaifreetrial.md) (default) - Use gpt-4 or gpt-3.5-turbo for free with our API key, or with your own. gpt-4 is probably the most capable of all the options.
- [OpenAI](../reference/Models/openai.md) - Use any OpenAI model with your own key. Can also change the base URL if you have a server that uses the OpenAI API format, including using the Azure OpenAI service, LocalAI, etc.
- [AnthropicLLM](../reference/Models/anthropicllm.md) - Use claude-2 with your Anthropic API key. Claude 2 is also highly capable, and has a 100,000 token context window.
- [GooglePaLMAPI](../reference/Models/googlepalmapi.md) - Try out the `chat-bison-001` model, which is currently in public preview, after creating an API key in [Google MakerSuite](https://makersuite.google.com/u/2/app/apikey)
**Local Models**
- [Ollama](../reference/Models/ollama.md) - If you are on Mac or Linux, Ollama is the simplest way to run open-source models like Code Llama.
- [OpenAI](../reference/Models/openai.md) - If you have access to an OpenAI-compatible server (e.g. llama-cpp-python, LocalAI, FastChat, TextGenWebUI, etc.), you can use the `OpenAI` class and just change the base URL.
- [GGML](../reference/Models/ggml.md) - An alternative way to connect to OpenAI-compatible servers. Will use `aiohttp` directly instead of the `openai` Python package.
- [LlamaCpp](../reference/Models/llamacpp.md) - Build llama.cpp from source and use its built-in API server.
**Open-Source Models (not local)**
- [TogetherLLM](../reference/Models/togetherllm.md) - Use any model from the [Together Models list](https://docs.together.ai/docs/inference-models) with your Together API key.
- [ReplicateLLM](../reference/Models/replicatellm.md) - Use any open-source model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models) with your Replicate API key.
- [HuggingFaceInferenceAPI](../reference/Models/huggingfaceinferenceapi.md) - Use any open-source model from the [Hugging Face Inference API](https://huggingface.co/inference-api) with your Hugging Face token.
## Change the default LLM
In `config.json`, you'll find the `models` property, a list of the models that you have saved to use with Continue:
```json
"models": [
{
"title": "Smart Model",
"provider": "openai-free-trial",
"model": "gpt-4"
},
{
"title": "Fast Model",
"provider": "openai-free-trial",
"model": "gpt-3.5-turbo"
}
]
```
Also in `config.json` is the `model_roles` property. This is optional, but allows you to specify different models to be used for different tasks. The values of each role must match the `title` property of one of the models in `models`. The available roles are:
- `edit` is used for generating code changes when using the '/edit' and '/comment' slash commands
- `chat` is used for all chat responses
- `summarize` is used for creating summaries. The model with this role will be used in the following scenarios:
- generating the Continue session title
- generating a summary of changes shown when you use the '/edit' slash command
- when the Continue session chat messages exceed the context length, they are summarized to avoid complete truncation
- `default` is the fallback, used when the other model roles are not specified
Here's an example that will use GPT-4 for all tasks except summarization, which will use GPT-3.5 Turbo:
```json
"model_roles": {
"default": "Smart Model",
"summarize": "Fast Model"
}
```
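The role resolution described above amounts to a lookup with a fallback to `default`. A rough sketch (the `resolve_model` helper is hypothetical, not Continue's implementation):

```python
# Titles from the `models` list map to model configurations
models = {
    "Smart Model": {"provider": "openai-free-trial", "model": "gpt-4"},
    "Fast Model": {"provider": "openai-free-trial", "model": "gpt-3.5-turbo"},
}
model_roles = {"default": "Smart Model", "summarize": "Fast Model"}

def resolve_model(role: str) -> dict:
    # Roles without an explicit entry fall back to "default"
    title = model_roles.get(role, model_roles["default"])
    return models[title]

# '/edit' has no explicit role here, so it falls back to GPT-4;
# summaries use GPT-3.5 Turbo
assert resolve_model("edit")["model"] == "gpt-4"
assert resolve_model("summarize")["model"] == "gpt-3.5-turbo"
```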
Just by specifying the `model` and `provider` properties, we will automatically detect prompt templates and other important information, but if you're looking to do something beyond this basic setup, we'll explain a few other options below.
## Azure OpenAI Service
If you'd like to use OpenAI models but are concerned about privacy, you can use the Azure OpenAI service, which is GDPR and HIPAA compliant. After applying for access [here](https://azure.microsoft.com/en-us/products/ai-services/openai-service), you will typically hear back within only a few days. Once you have access, set up a model in `config.json` like so:
```json
"models": [{
"title": "Azure OpenAI",
"provider": "openai",
"model": "gpt-4",
"api_base": "https://my-azure-openai-instance.openai.azure.com/",
"engine": "my-azure-openai-deployment",
"api_version": "2023-07-01-preview",
"api_type": "azure",
"api_key": "<MY_API_KEY>"
}]
```
The easiest way to find this information is from the chat playground in the Azure OpenAI portal. Under the "Chat Session" section, click "View Code" to see each of these parameters.
## Self-hosting an open-source model
If you want to self-host on Colab, RunPod, HuggingFace, Haven, or another hosting provider, you will need to wire up a new LLM class. It only needs to implement 3 primary methods: `stream_complete`, `complete`, and `stream_chat`, and you can see examples in [`server/continuedev/libs/llm`](https://github.com/continuedev/continue/tree/main/server/continuedev/libs/llm).
If by chance the provider has the exact same API interface as OpenAI, the `OpenAI` class will work for you out of the box, after changing only the `api_base` parameter.
## Customizing the Chat Template
Most open-source models expect a specific chat format, for example llama2 and codellama expect the input to look like `"[INST] How do I write bubble sort in Rust? [/INST]"`. Continue will automatically attempt to detect the correct prompt format based on the `model` value that you provide, but if you are receiving nonsense responses, you can use the `template` property to explicitly set the format that you expect. The options are: `["llama2", "alpaca", "zephyr", "phind", "anthropic", "chatml"]`.
If you want to create an entirely new chat template, this can be done in [config.py](./code-config.md) by defining a function and adding it to the `template_messages` property of your `LLM`. Here is an example of `template_messages` for the Alpaca/Vicuna format:
```python
def template_alpaca_messages(msgs: List[Dict[str, str]]) -> str:
prompt = ""
if msgs[0]["role"] == "system":
prompt += f"{msgs[0]['content']}\n"
msgs.pop(0)
prompt += "### Instruction:\n"
for msg in msgs:
prompt += f"{msg['content']}\n"
prompt += "### Response:\n"
return prompt
```
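To see what this template produces, here is a usage sketch (the message list is illustrative); the function is repeated so the snippet runs on its own:

```python
from typing import Dict, List

def template_alpaca_messages(msgs: List[Dict[str, str]]) -> str:
    prompt = ""
    if msgs[0]["role"] == "system":
        prompt += f"{msgs[0]['content']}\n"
        msgs.pop(0)
    prompt += "### Instruction:\n"
    for msg in msgs:
        prompt += f"{msg['content']}\n"
    prompt += "### Response:\n"
    return prompt

rendered = template_alpaca_messages([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write bubble sort in Rust."},
])
# rendered ==
# "You are a helpful assistant.\n### Instruction:\nWrite bubble sort in Rust.\n### Response:\n"
print(rendered)
```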
It can then be used like this:
```python title="~/.continue/config.py"
def modify_config(config: ContinueConfig) -> ContinueConfig:
config.models.default.template_messages = template_alpaca_messages
return config
```
This exact function and a few other default implementations are available in [`continuedev.libs.llm.prompts.chat`](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/prompts/chat.py).
## Customizing the /edit Prompt
You can also customize the prompt used in the '/edit' slash command. We already have a well-engineered prompt for GPT-4 and sensible defaults for less powerful open-source models, but you might wish to play with the prompt and try to find a more reliable alternative if, for example, you are getting English as well as code in your output.
To customize the prompt, use the `prompt_templates` property of any `LLM`, which is a dictionary, and set the "edit" key to a template string with Mustache syntax. The 'file_prefix', 'file_suffix', 'code_to_edit', 'context_items', and 'user_input' variables are available in the template. Here is an example (the default for non-GPT-4 models):
````python
"""
[INST] Consider the following code:
```
{{{code_to_edit}}}
```
Edit the code to perfectly satisfy the following user request:
{{{user_input}}}
Output nothing except for the code. No code block, no English explanation, no start/end tags.
[/INST]
"""
````
It can then be used like this in `config.py`:
```python title="~/.continue/config.py"
def modify_config(config: ContinueConfig) -> ContinueConfig:
config.models.edit.prompt_templates["edit"] = "<INSERT_TEMPLATE_HERE>"
return config
```
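To get a feel for how the `{{{...}}}` placeholders are filled in, here is a hand-rolled substitution sketch; Continue uses a real Mustache renderer, so this helper is illustrative only:

```python
# Naive stand-in for Mustache triple-brace (unescaped) substitution
def render(template: str, variables: dict) -> str:
    for name, value in variables.items():
        template = template.replace("{{{" + name + "}}}", value)
    return template

template = (
    "[INST] Consider the following code:\n"
    "{{{code_to_edit}}}\n"
    "Edit the code to perfectly satisfy the following user request:\n"
    "{{{user_input}}}\n"
    "[/INST]"
)
prompt = render(template, {
    "code_to_edit": "def add(a, b): return a - b",
    "user_input": "Fix the bug in this function",
})
assert "{{{" not in prompt
assert "Fix the bug in this function" in prompt
```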
A few pre-made templates are available in [`continuedev.libs.llm.prompts.edit`](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/prompts/edit.py).

---
title: Other Configuration
description: Swap out different LLM providers
keywords: [temperature, custom policies, custom system message]
---
# Other Configuration
See the [ContinueConfig Reference](../reference/config) for the full list of configuration options.
## Customize System Message
You can write your own system message, a set of instructions that will always be top-of-mind for the LLM, by setting the `system_message` property to any string. For example, you might request "Please make all responses as concise as possible and never repeat something you have already explained."
System messages can also reference files. For example, if there is a markdown file (e.g. at `/Users/nate/Documents/docs/reference.md`) you'd like the LLM to know about, you can reference it with [Mustache](http://mustache.github.io/mustache.5.html) templating like this: "Please reference this documentation: {{ Users/nate/Documents/docs/reference.md }}". As of now, you must use an absolute path.
## Temperature
Set `temperature` to any value between 0 and 1. Higher values will make the LLM more creative, while lower values will make it more predictable. The default is 0.5.
## Custom Policies
Policies can be used to deeply change the behavior of Continue, or to build agents that take longer sequences of actions on their own. The [`DefaultPolicy`](https://github.com/continuedev/continue/blob/main/server/continuedev/plugins/policies/default.py) handles the parsing of slash commands, and otherwise always chooses the `SimpleChatStep`, but you could customize by for example always taking a "review" step after editing code. To do so, create a new `Policy` subclass that implements the `next` method:
```python title="~/.continue/config.py"
class ReviewEditsPolicy(Policy):
    default_step: Step = SimpleChatStep()

    def next(self, config: ContinueConfig, history: History) -> Step:
        # Get the last step
        last_step = history.get_current()

        # If it edited code, then review the changes
        if isinstance(last_step, EditHighlightedCodeStep):
            return ReviewStep()  # Not implemented

        # Otherwise, choose between EditHighlightedCodeStep and the default
        # SimpleChatStep based on any slash command in the user's input
        observation = last_step.observation if last_step is not None else None
        if observation is not None and isinstance(observation, UserInputObservation):
            user_input = observation.user_input
            if user_input.startswith("/edit"):
                return EditHighlightedCodeStep(user_input=user_input[5:])
            return self.default_step.copy()

        # Don't do anything until the user enters something else
        return None
```
Then, in `~/.continue/config.py`, override the default policy:
```python title="~/.continue/config.py"
def modify_config(config: ContinueConfig) -> ContinueConfig:
config.policy_override = ReviewEditsPolicy()
return config
```

---
title: Overview
description: Continue can be deeply customized
keywords: [custom, slash commands, models, context providers]
---
# Overview
Continue can be deeply customized by editing `~/.continue/config.json` (`%userprofile%\.continue\config.json` for Windows) and `config.py` on your machine. These files are created the first time you run Continue.
Currently, you can customize the following:
- [Models](./models.md) - Use Continue with any LLM, including local models, Azure OpenAI service, any OpenAI-compatible API, and more.
- [Context Providers](./context-providers.md) - Just type '@' to easily add attachments to your prompt. Define which sources you want to easily reference, including GitHub Issues, terminal output, and preset URLs.
- [Slash Commands](./slash-commands.md) - Call custom prompts or programs written with our SDK by typing `/`.
- [Other Configuration](./other-configuration.md) - Configure other settings like the system message and temperature.

---
title: Slash Commands
description: Shortcuts that can be activated by prefacing your input with '/'
keywords: [slash command, custom commands, step]
---
# Slash Commands
Slash commands are shortcuts that can be activated by prefacing your input with '/'. For example, the built-in '/edit' slash command lets you stream edits directly into your editor.
There are two ways to add custom slash commands:
1. With natural language prompts - this is simpler and only requires writing a string or string template.
2. With a custom `Step` - this gives you full access to the Continue SDK and allows you to write arbitrary Python code.
## "Custom Commands" (Use Natural Language)
You can add custom slash commands by adding to the `custom_commands` property in `config.json`.
- `name`: the name of the command, which will be invoked with `/name`
- `description`: a short description of the command, which will appear in the dropdown
- `prompt`: a set of instructions to the LLM, which will be shown in the prompt
Custom commands are great when you are frequently reusing a prompt. For example, if you've crafted a great prompt and frequently ask the LLM to check for mistakes in your code, you could add a command like this:
```json title="~/.continue/config.json"
"custom_commands": [{
  "name": "check",
  "description": "Check for mistakes in my code",
  "prompt": "Please read the highlighted code and check for any mistakes. You should look for the following, and be extremely vigilant:\n- Syntax errors\n- Logic errors\n- Security vulnerabilities\n- Performance issues\n- Anything else that looks wrong\n\nOnce you find an error, please explain it as clearly as possible, but without using extra words. For example, instead of saying 'I think there is a syntax error on line 5', you should say 'Syntax error on line 5'. Give your answer as one bullet point per mistake found."
}]
```
## Custom Slash Commands
If you want to go a step further than writing custom commands with natural language, you can use a `SlashCommand` to run an arbitrary Python function, with access to the Continue SDK. This requires using `config.py` instead of `config.json`, unless you specify a built-in Step name.
To do this, create a subclass of `Step` with the `run` method implemented, and this is the code that will run when you call the command. For example, here is a step that generates a commit message:
```python title="~/.continue/config.py"
class CommitMessageStep(Step):
async def run(self, sdk: ContinueSDK):
# Get the root directory of the workspace
dir = sdk.ide.workspace_directory
# Run git diff in that directory
diff = subprocess.check_output(
["git", "diff"], cwd=dir).decode("utf-8")
# Ask the LLM to write a commit message,
# and set it as the description of this step
resp = await sdk.models.default.complete(
f"{diff}\n\nWrite a short, specific (less than 50 chars) commit message about the above changes:")
yield SetStep(description=resp) # Updates are yielded so the UI can be incrementally updated
def modify_config(config: ContinueConfig) -> ContinueConfig:
config.slash_commands.append(
SlashCommand(
name="commit",
description="Generate a commit message for the current changes",
step=CommitMessageStep,
)
)
return config
```

---
title: Development data
description: Collecting data on how you build software
keywords: [development data, dev data, LLM-aided development]
---
# 🧑‍💻 Development Data
When you use Continue, you automatically collect data on how you build software. By default, this development data is saved to `.continue/dev_data` on your local machine. When combined with the code that you ultimately commit, it can be used to improve the LLM that you or your team use (if you allow).
You can read more about how development data is generated as a byproduct of LLM-aided development and why we believe that you should start collecting it now: [It's time to collect data on how you build software](https://blog.continue.dev/its-time-to-collect-data-on-how-you-build-software)

---
title: How Continue works
description: Overview of the Continue architecture
keywords: [architecture, vs code, jetbrains, ide, manually]
---
# ⚙️ How Continue works
![Continue Architecture Diagram](/img/continue-diagram.png)
## Overview
- Continue is typically used inside of an Integrated Development Environment (IDE) like VS Code or JetBrains
- All units of action in Continue are called steps. Steps can be recursively composed into more complex steps
- Steps have access to the SDK, which enables you to use LLMs in your workflows (e.g. edit a file, call a model, etc)
- The Server facilitates communication between the IDE and the GUI and determines what steps to take next
- The GUI enables you to review every automated step, giving you the opportunity to undo and rerun any or all
- It is also possible to run Continue in headless, asynchronous mode. Please reach out if you are interested in this!
## Supported IDEs
### VS Code (Beta)
Continue can be used as a VS Code extension.
You can install it from the Visual Studio Marketplace [here](https://marketplace.visualstudio.com/items?itemName=Continue.continue).
### JetBrains (Alpha)
Continue can be used as a plugin inside of IntelliJ, PyCharm, WebStorm, etc.
You can install it from the JetBrains Marketplace [here](https://plugins.jetbrains.com/plugin/22707-continue-extension).
### Add Continue to a new IDE
Here is how you can get started with adding Continue to a new IDE:
1. Let us know that you would like to add Continue to a new IDE by opening an issue [here](https://github.com/continuedev/continue/issues/new/choose)
2. Implement a class that maps each of the actions like "read file" to the API provided by that IDE like [here](https://github.com/continuedev/continue/blob/main/extensions/vscode/src/continueIdeClient.ts)
3. Learn more about what you might also do by looking at this pull request that added initial support for JetBrains [here](https://github.com/continuedev/continue/pull/457)
## Running the server manually
If you would like to run the Continue server manually, rather than allowing the IDE to automatically set it up, you can follow the short tutorial for [Manually Running Continue](./walkthroughs/manually-run-continue.md).

---
title: How to use Continue
description: Using LLMs as you code with Continue
keywords: [how to, edit, refactor, boilerplate, context]
---
# 🧑‍🎓 How to use Continue
:::info
**TL;DR: Using LLMs as you code can accelerate you if you leverage them in the right situations. However, they can also cause you to get lost and confused if you trust them when you should not. This page outlines when and where we think you should and should not use Continue.**
:::
## Introduction
Continue will only be as helpful as the LLM you are using to power the edits and explanations. LLMs sometimes hallucinate, so they might make up a library or invent some syntax that does not exist. If something suggested is not working or seems odd to you, it's best to double-check with a Google search to make sure you are not falling into a rabbit hole.
As you use Continue more, you will learn when to trust it. A great way to get started is to just play with it and start to get a sense of what works and what does not. Continue always asks you to accept / reject any changes it suggests, so you can always undo if something goes wrong.
If you are trying to use it for a new task and don't have a sense of how much Continue can help you complete it, it can often be helpful to start like this:
1. Highlight the code section(s) that you don't understand and type "tell me how this code works" in the input box
2. If the explanation seems reasonable, then, while still highlighting the code section(s), type "how would you change this code to [INSERT TASK]?"
3. If this explanation is also pretty good, then, while still highlighting the code section(s), type `/edit [INSERT TASK]`
4. If it does not work on the first attempt, click `reject` on its suggestions and try again; it will often make a different suggestion each time
5. If it is not giving you what you want after another attempt, click `reject` and try again with more specific / clear instructions, articulating exactly what you want it to do and not to do
6. If this still does not work, then you likely need to break down the task into smaller sub-tasks and ask the LLM to do each of those one at a time or just do it yourself manually
Remember: You are responsible for all code that you ship, whether it was written by you or by an LLM that you directed. This means it is crucial that you review what the LLM writes. To make this easier, we provide natural language descriptions of the actions the LLM took in the Continue GUI.
## When to use Continue
Here are tasks that Continue excels at helping you complete:
### Laborious edits
Continue works well in situations where find and replace does not work (e.g. "/edit change all of these to be like that")
Examples
- "/edit Use 'Union' instead of a vertical bar here"
- “/edit Make this use more descriptive variable names”
### Writing files from scratch
Continue can help you get started building React components, Python scripts, Shell scripts, Makefiles, unit tests, etc.
Examples
- “/edit write a python script to get posthog events"
- “/edit add a react component for syntax highlighted code"
### Creating boilerplate from scratch
Continue can go even further. For example, it can help build the scaffolding for a Python package, which includes a typer cli app to sort the arguments and print them back out.
Examples
- “/edit use this schema to write me a SQL query that gets recently churned users”
- “/edit create a shell script to back up my home dir to /tmp/"
### Fix highlighted code
After selecting the code section(s), try to refactor it with Continue (e.g “/edit change the function to work like this” or “/edit do this everywhere”)
Examples
- “/edit migrate this digital ocean terraform file into one that works for GCP”
- “/edit rewrite this function to be async”
### Ask about highlighted code or an entire file
If you don't understand how some code works, highlight it and ask "how does this code work?"
Examples
- “where in the page should I be making this request to the backend?”
- “how can I communicate between these iframes?”
### Ask about errors
Continue can also help explain errors / exceptions and offer possible solutions. When you come across an error / exception in your terminal, press `cmd+shift+r` (macOS) / `ctrl+shift+r` (Windows). This will throw the stack trace into Continue and ask it to explain the issue to you.
### Figure out what shell command to run
Instead of switching windows and getting distracted, you can ask things like "How do I find the process running on port 8000?"
Examples
- "what is the load_dotenv library name?"
- "how do I find running process on port 8000?"
### Ask single-turn open-ended questions
Instead of leaving your IDE, you can ask open-ended questions that you don't expect to turn into multi-turn conversations.
Examples
- “how can I set up a Prisma schema that cascades deletes?”
- "what is the difference between dense and sparse embeddings?"
### Editing small existing files
You can highlight an entire file and ask Continue to improve it as long as the file is not too large.
Examples
- “/edit here is a connector for postgres, now write one for kafka”
- "/edit Rewrite this API call to grab all pages"
### Using context from multiple other files
Similar to how you would make changes manually, focus on one file at a time. But if there is key information in other files, highlight those sections of code too so they can be used as additional context
### Tasks with a few steps
There are many more tasks that Continue can help you complete. Typically, these will be tasks that don't involve too many steps to complete.
Examples
- “/edit make an IAM policy that creates a user with read-only access to S3”
- “/edit change this plot into a bar chart in this dashboard component”
## When to not use Continue
Here are tasks that Continue is **not** helpful with today:
### Deep debugging
If you are 20 minutes into debugging a complicated issue across many files, then Continue won't be able to help you connect the dots yet. That said, Continue can provide ideas of what you might do at different points if you share what you have figured out along the way and ask for ideas of what to try.
### Multi-file edits in parallel
At the moment, Continue can only edit one file at a time. If you figure out which files need to change, you can direct Continue to help you change them one at a time though.
### Using context of the entire file
If files get too large, it can be difficult for Continue to fit them into the limited LLM context windows. Try to highlight the section of code that includes the relevant context. It's rare that you need the entire file.
### Editing large files
Similarly, if you try to edit too many lines at once, you might run into context window limits. It also will likely be very slow to apply the suggestions.
### Highlighting really long lines
If you highlight very long lines (e.g. a complex SVG), you might also run into issues like those above.
### Tasks with many steps
There are other tasks that Continue won't be able to take on all at once. Typically, though, if you figure out how to break the task into sub-tasks, you can get help from Continue with those.

---
id: index
slug: /
---
This index page should automatically redirect to [/docs/intro](./intro.md)

---
title: Introduction
description: Continue is the open-source autopilot for software development
keywords: [introduction, intro, continue, autopilot, chatgpt]
---
# Introduction
![continue-cover-logo](/img/continue-cover-logo.png)
**Continue is the open-source autopilot for software development: an IDE extension that brings the power of ChatGPT to [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains](https://plugins.jetbrains.com/plugin/22707-continue-extension).**
You determine when Large Language Models (LLMs) like [GPT-4](https://openai.com/research/gpt-4) and [Code Llama](https://about.fb.com/news/2023/08/code-llama-ai-for-coding/) should act as an autopilot, helping you complete software development tasks. You highlight some code and then use natural language instructions (and optional slash commands like `/edit`) to tell the LLM what to do.
Many developers have begun to use ChatGPT while coding; however, the experience is painful because of how much copying, pasting, and editing is required to provide the context and incorporate the generated answers into your codebase. Continue eliminates this pain by enabling LLMs to natively act in your IDE as you complete your workflows.

# Assembly
Assembly is the #20 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Assembly is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Assembly is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Assembly is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Assembly is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Assembly makes up 2.36 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Assembly makes up 0.78 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Assembly is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Assembly is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Assembly is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Assembly has 43,572 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Assembly projects have had 14,301 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Assembly projects have had 10,605 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Assembly projects have had 119,341 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Assembly projects have had 50,063 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/the_Demongod](https://www.reddit.com/r/asm/comments/14q5qi8/comment/jqlmfvn/?utm_source=share&utm_medium=web2x&context=3)
> Assembly isn't one language, it's a general term for any human-readable representation of a processor's ISA. There are many assembly languages, and there are even different representations of the same ISA. I'm not sure what your book you're using but there are operand order differences between AT&T and Intel x86 (although your example looks like AT&T). You shouldn't be using ChatGPT for any subject you aren't already familiar with though, or you won't be able to recognize when it's hallucinating, or even when it's simply lacking context. Just use a normal, reputable resource like the book you're following. I recommend checking out this wikibook for free online: https://en.wikibooks.org/wiki/X86_Assembly
[u/brucehoult](https://www.reddit.com/r/asm/comments/14q5qi8/comment/jqp8rig/)
> ChatGPT makes a good attempt, but it doesn't actually understand code — ESPECIALLY assembly language, where each instruction exists in a lot of context — and will usually have some kind of bugs in anything it writes.
[u/dvof](https://www.reddit.com/r/asm/comments/105vl0v/comment/j3hn8xp/?utm_source=share&utm_medium=web2x&context=3)
> Idk why all the chatGPT comments are all downvoted, guys it is inevitable that it is going to be a standard part of our lives now. The sooner students start using it the sooner people will realize its limitations. It is a great learning tool and I use it when learning a new subject.

# Bash
Bash is the #7 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Bash is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Bash is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Bash is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Bash is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Bash makes up 8.69 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Bash makes up 3.01 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Bash is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Bash is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Bash is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Bash has 154,693 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Bash projects have had 866,313 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Bash projects have had 574,292 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Bash projects have had 3,605,350 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Bash projects have had 2,121,149 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/[deleted]](https://www.reddit.com/r/bash/comments/124h7gj/comment/jdzbtvp/?utm_source=share&utm_medium=web2x&context=3)
> chatgpt is very bad at bash. Every script that someone has posted here has had some really glaring errors, often data-destructive ones. In general for every single use-case of chatgpt (or any other generative model) unless you understand the correct output you should not trust it. You can use it to produce documents and reports or even scripts, but you should always read the output carefully and validate that what it says is correct.
[u/RandomXUsr](https://www.reddit.com/r/bash/comments/zix2am/comment/iztmsp3/?utm_source=share&utm_medium=web2x&context=3)
> I've tried getting it to write some code. Very little is useful. It still very much requires education and experience with the tools you use in order to get effective, clean, and efficient code. I had tried some python scripts, but you need to specify libraries and tools to be used, and it doesn't do that well. As it learns more, it may become better at this, but for now it's a neat toy without real world benefits
[u/stepbroImstuck_in_SU](https://www.reddit.com/r/bash/comments/123buum/comment/jduund7/?utm_source=share&utm_medium=web2x&context=3)
> This is more general advice for using chatGPT for generating bash scripts. chatGPT is a powerful tool, but it has both general and bash/linux related weaknesses. Never run script you dont understand. That is a hard pill to shallow when learning bash, but thankfully you can ask chatGPT to explain its reasoning. To be sure, open a new conversation and ask for explanation of part of the code there. You can also ask another instance for a general explanation of a new syntax or command, and then cross-check the original code. After seeing what chatGPT knows about an individual command, it doesnt hurt to quicklycheck the man-page anyway. ChatGPT is prone for using “general” syntax and flags even when some command doesnt exist. Lastly, commands can change through years and environments. Your man-pages tell you what version you have. Its a good strategy to ask if any tools already exist for the task or are build in, before asking for a bash script. For example you could script dropping your ssh-key in a remote machines .ssh-dir and then appending it to the trusted-keys file (or in folder) - or you can just use the ssh commands build in add-key option. There are a lot of tools build in to your average linux installation, and your distros repos are full of even more lightweight, trustworthy tools (as long as you stick to the official repos). If you arent exactly sure how a script behaves or if the syntax is robust, create your own test environments. You can create virtual (or real) directory structures, quickly fill them with very small files and run the script without touching your actual data. Ask chatGPT for more information (and use above steps to understand what it says). Related to the last point, pay attention to especially these aspects of any script chatGPT spews back: hardcoded paths (or less strictly, any path that isnt declared as a variable on the start of the script). 
> If instead of a robust test environment, you just use a directory with subdirectories, hardcoded paths can escape that environment, connections outside your machine/local network: While I feel it is unlikely that chatGPT will compromise your system by opening an unsafe connection to unsafe address, the risk is worth mitigating. What if the first guy who got that address noticed its not used, and bought it to distribute malware, hoping chatGPT offers it again? But more likely problem is that you can rapidly pull a lot of data from the internet. It just opens up more doors to make a mess, modifying files in /etc, or your bootloader. You can cause all kinds of damage, including permanently disabling rights to modify the files to fix it (misconfigured privileges), making your system unbootable (fstab, grub), and just generally messing up your system. Back it up before any changes, read the man-pages twice, make small tests (and remember you usually need to reload systemd or reboot before changes take effect)

# C++
C++ is the #10 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ C++ is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ C++ is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ C++ is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
✅ C++ is one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ C++ makes up 192.84 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ C++ makes up 87.73 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ C++ makes up 290.5 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
✅ C++ makes up 69.9 GB of the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ C++ makes up 52 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
C++ has 801,823 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
C++ projects have had 2,767,540 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
C++ projects have had 2,255,179 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
C++ projects have had 9,245,881 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
C++ projects have had 5,192,579 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/RainbowWarfare](https://www.reddit.com/r/cpp/comments/172vc4q/comment/k3z07sj/?utm_source=share&utm_medium=web2x&context=3)
> I use ChatGPT for tools and libs where the documentation is horrendous and its a coin toss as to whether it confidently talks truth or nonsense. I dont think its a good idea for beginners to be leaning on it as a teaching aid.
[u/TheBrainStone](https://www.reddit.com/r/cpp/comments/172vc4q/comment/k3z96kd/?utm_source=share&utm_medium=web2x&context=3)
> My experience with ChatGPT is that it sucks ass with C++. Anything beyond basic syntax and programming it just gets wrong. My typical interaction is to ask it something specific, then spend the next 3 queries clarifying and then the next few pointing out issues in the code or methodology. I cannot recommend.
[u/Asleep-Dress-3578](https://www.reddit.com/r/cpp/comments/172vc4q/comment/k3zprne/?utm_source=share&utm_medium=web2x&context=3)
> I have github copilot enabled in my ide, so whatever it suggests I can either use it or ignore. I find it helpful in writing docstrings and filling out somewhat repetitive rows (e.g. pattern matching cases). But otherwise it is not that clever. I also use chatgpt in some rare cases when I am curious how would chatgpt solve this or that problem. It is good to write some simple, short functions; but it is not reliable enough to write medium to very complex algorithms.

# C
C is the #11 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ C is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ C is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ C is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ C is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ C makes up 222.88 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ C makes up 183.83 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ C makes up 48.9 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ C is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ C makes up 55 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
C has 400,941 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
C projects have had 1,300,955 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
C projects have had 1,285,709 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
C projects have had 5,240,188 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
C projects have had 3,741,913 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/MyuuDio](https://www.reddit.com/r/C_Programming/comments/17rzzy9/comment/k8mqxv5/)
> Hard agree with the last part. ChatGPT & other AI tools can be pretty awful for non-trivial C code. It often spits out things that might work in other syntactically similar C-style, such as using string literals as switch cases, or concatenating string literals with the + operator. It's the worst nightmare for someone who's actively learning to code; it will confidently answer your question incorrectly, while sounding completely reasonable.
[u/aghast_nj](https://www.reddit.com/r/C_Programming/comments/178cc4l/comment/k4z9cby/?utm_source=share&utm_medium=web2x&context=3)
> ChatGPT is failing you twice. First, because it's telling you about a bogus problem. Second, because it is not telling you about a real problem. The bogus problem is the redeclaration issue. It's technically correct that you will get a diagnostic if you try to define the same local variable twice in the same scope. But the solution there is trivial: don't define it, just re-use it. The more pernicious problem is handling or not handling the failure of realloc. When you overwrite the list variable with the result of realloc there is the possibility that the result is NULL. In that case, you have "lost" your original pointer.
[u/Meatball_Subzero](https://www.reddit.com/r/C_Programming/comments/16geaal/comment/k078frr/?utm_source=share&utm_medium=web2x&context=3)
> I've been using copilot for nearly two years now. For me it's just a nice auto complete. I don't think it ever solves anything for me. It just makes me faster, especially with repetitive shit.

# Clojure
Clojure is the #36 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Clojure is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Clojure is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Clojure is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Clojure is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Clojure is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Clojure is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Clojure is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Clojure is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Clojure is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Clojure has 17,630 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Clojure projects have had 112,757 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Clojure projects have had 84,128 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Clojure projects have had 518,359 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Clojure projects have had 272,970 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/noprompt](https://www.reddit.com/r/Clojure/comments/148nhuj/comment/jo2z2n8)
> I've been using Copilot since December 2022. It sucks for Clojure but can be great for other languages like Python, JavaScript, SQL, etc. if you know how to prompt it. As other have mentioned, Copilot excels at reducing boilerplate and picking up on patterns. For example, lets say there is a table of data in a markdown document and you want to convert it to a vector of maps. You can copy/paste the markdown table into your buffer as a comment and just start writing the data structure you want it to be, Copilot will figure it out and complete it. Its also useful for generating random utility functions. Recently in JavaScript, I typed `function lerp` (linear interpolation) and it pretty quickly filled it in. I had an array of hex color values that I wanted to be RGB and I wanted to double the number of values by interpolating between them. All I had to do was type that in a comment and wait a second before it gave me a working rough draft of the function. Copilot can actually do a lot of these things for Clojure but when I was trying to use it I found myself consistently having to fix issues with delimiters, typically round braces. Eventually, I just gave up on it. Maybe I'll give it another shot when Copilot-X releases. ChatGPT is much more useful for Clojure than Copilot. It does hallucinate and get some things wrong but overall its awesome for generating documentation, explaining code, translating diffs into PR notes, and exploring ideas. I've found it very useful for random Java questions and then translating the answers into mostly working Clojure code. These things are handy tools and have quirks but they're going to get better. It's a great time to be a cosmopolitan (polyglot) programmer.
[waffletower](https://news.ycombinator.com/item?id=35803856)
> No Clojure. No Julia. No Haskell. No Racket. No Scheme. No Common Lisp. No OCaml. And, as much as I despise Microsoft, No C#. No F#. No Swift. No Objective-C. No Perl. No Datalog. A glaringly lacking choice of languages.
[@EricTheTurner](https://x.com/EricTheTurner/status/1600344406166380544?s=20)
> FizzBuzz was once a common programming exercise used for screening software developers (maybe it still is?) I told chatGPT to "Write an efficient fizz buzz function in Clojure".

# C#
C# is the #9 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ C# is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ C# is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ C# is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ C# is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ C# makes up 128.37 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ C# makes up 36.83 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ C# makes up 38.4 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ C# is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ C# makes up 21 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
C# has 1,606,619 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
C# projects have had 1,191,927 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
C# projects have had 1,489,756 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
C# projects have had 4,581,919 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
C# projects have had 2,521,561 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/Slypenslyde](https://www.reddit.com/r/csharp/comments/1768d7o/comment/k4kguvf/?utm_source=share&utm_medium=web2x&context=3)
> AI tools give me the code I need maybe 20% to 40% of the time. Another 30% or so I have to tweak it to make it work. For the remaining percentages what it spits out needs so many changes it's easier to write it myself than expect that I tweaked it without mistakes. Sometimes it feels like CoPilot might slow me down since now I tend to hit a new line and wait 2-3 seconds to see what it suggests.
[u/telewebb](https://www.reddit.com/r/csharp/comments/1768d7o/comment/k4kod5z/?utm_source=share&utm_medium=web2x&context=3)
> I haven't found any in IDE plug-in that's been all that great. I've used copilot in conjunction with chatGPT and find myself using chatGPT way more than copilot. Keep in mind I use LLMs more as an enhanced search engine than a code writer. For code, I find it helpful to get a second opinion on a refactor, handing over error messages, writing one liners for some logic, and handing over a file to act as a second pair of eyes for what I can't see. Outside of code, I use it as a rubber ducky that can talk back when trying to think through some problems. Though tbh, the act of thinking about my problem and structuring it out to a prompt often solves my problem before I even hit send. Actually, now that I think about it. The damn thing has been a God send for writing and debugging terraform.
[u/quebecbassman](https://www.reddit.com/r/csharp/comments/1768d7o/comment/k4kgylh/?utm_source=share&utm_medium=web2x&context=3)
> Call me old, but I prefer to code things myself. AI is good to give you hints and steer you in the right direction. It can also write a lot of bullshit that looks like legit code. Then, debugging code that you didn't write gets very difficult. Remember that you write code once, but will read it many, many times. Have your boss pay for training.

# CSS
CSS is the #2 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ CSS is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ CSS is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ CSS is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ CSS is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ CSS makes up 145.33 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ CSS makes up 22.67 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ CSS is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ CSS is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ CSS is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
CSS has 800,588 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
CSS projects have had 443,082 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
CSS projects have had 436,767 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
CSS projects have had 4,314,244 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
CSS projects have had 1,673,966 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/Kthulu666](https://www.reddit.com/r/css/comments/zudl9x/comment/j1ikchb/?utm_source=share&utm_medium=web2x&context=3)
> I'm not sure how it could help learn. I spent a little while messing with it and trying to generate some html/css/js for a simple responsive hamburger menu. Results were mixed. It got me most of the way there, but had trouble really putting it all together into one menu that worked as intended. I could have spent more time trying to manipulate it, but that would've taken more time that it would have to make the thing by hand. On some level it's just google with extra steps since you need to check and verify everything it outputs. I found that Lucas from LTT had a good assessment of it: it's usually pretty good, but when it's wrong, it's confidently wrong. I think it would be a crappy teaching aid since the student doesn't immediately recognize when the bot is wrong or why the code it produced doesn't work.
[u/ipromiseimnotakiller](https://www.reddit.com/r/css/comments/17gcln8/comment/k6g1esr/?utm_source=share&utm_medium=web2x&context=3)
> I use chatgpt daily and it works wonders, if you know what you're reading. Otherwise, if you don't know something as a complete beginner and take chatgpt response as gospel, you're gonna be in a world of hurt when it starts lying to you giving 3 year old outdated information..
[u/cryothic](https://www.reddit.com/r/css/comments/16owij3/comment/k1tjfqg/?utm_source=share&utm_medium=web2x&context=3)
> In that case it's great. And I like ChatGPT too. But a complete beginner doesn't see possible flaws in the solution. So there is the possibility they learn a bad practice. I use ChatGPT too sometimes, but you will need to look at the code. Don't just copy and paste.

# Dart
Dart is the #19 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Dart is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ Dart is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Dart is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Dart is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Dart is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Dart is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Dart is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Dart is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Dart is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Dart has 91,732 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Dart projects have had 171,518 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Dart projects have had 241,706 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Dart projects have had 230,340 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Dart projects have had 264,888 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/eibaan](https://www.reddit.com/r/dartlang/comments/142fbkc/comment/jnoc1ph/?utm_source=share&utm_medium=web2x&context=3)
> The amazing thing about LLMs like ChatGPT is that they develop a kind of "language sense" and "know" how to stick together the right tokens to achieve a certain goal. They don't "understand" Dart - or any other programming language. They just emit tokens that I probably want to see :) Also, we cannot fully comprehend the amount of data that has been processed. Billions and billions of lines of code in dozens if not hundreds of languages.
[u/Rusty-Swashplate](https://www.reddit.com/r/dartlang/comments/10yiu7d/comment/j7yflw0/?utm_source=share&utm_medium=web2x&context=3)
> Please note that ChatGPT is not sure about anything. It communicates that it knows what it says is true, but it's known to make up facts. Luckily the answer to your question is in the Dart docs. Alternatively StackOverflow has a sensible answer: https://stackoverflow.com/questions/57936263/dart-set-from-vs-set-of
[u/john2046](https://www.reddit.com/r/dartlang/comments/1390c2j/comment/jj0spnc/?utm_source=share&utm_medium=web2x&context=3)
> Fantastic recommendations. I actually did have ChatGPT help me override toString for a ton of these classes nested within classes in this giant object I'm trying to print so I can mock. Didn't think to tweak the toString method like that. Not sure I understand your quoted getter though with the slashes. I'll play around with it Monday though.

# Delphi
Delphi is the #27 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Delphi is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Delphi is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Delphi is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Delphi is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
❌ Delphi is not included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Delphi is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Delphi is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Delphi is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Delphi is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Delphi has 51,475 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Delphi projects have had 310 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Delphi projects have had 0 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Delphi projects have had 552 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Delphi projects have had 0 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/EasywayScissors](https://www.reddit.com/r/delphi/comments/wnhk9x/psa_github_copilot_works_with_delphi/?utm_source=share&utm_medium=web2x&context=3)
> PSA: GitHub Copilot works with Delphi
[Marco Geuze](https://gdksoftware.com/knowledgebase/delphi-and-chatgpt)
> As you can see, it is possible to use an AI for simple pieces of code to create basic Delphi code quickly. We can now go one step further and implement this in Delphi itself.
[u/sysrpl](https://www.reddit.com/r/delphi/comments/1006ybh/programming_pascal_using_an_ai_chatbot/?utm_source=share&utm_medium=web2x&context=3)
> I asked a series of Pascal programming questions to an AI chatbot system while testing its abilities, and the following page is a record of its responses.

# Elixir
Elixir is the #30 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Elixir is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Elixir is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Elixir is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Elixir is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Elixir is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Elixir is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Elixir is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Elixir is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Elixir is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Elixir has 9,510 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Elixir projects have had 113,018 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Elixir projects have had 65,166 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Elixir projects have had 255,430 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Elixir projects have had 210,145 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/a3th3rus](https://www.reddit.com/r/elixir/comments/16vrhr6/comment/k2xel5z/?utm_source=share&utm_medium=web2x&context=3)
> One day, I needed to implement a priority queue with amortized O(log n) decrease-key operation in Elixir, but I didn't know how, so I consulted Monica (which interfaces GPT-3, I think), and it gave me the code of a whole Elixir module that is absolutely wrong. It was a binary heap implemented using a single list as if it's a mutable array. Furthermore, it won't even compile! I tried to correct the "mistake" GPT made, so I told it more about Elixir, about immutability, about lists in Elixir. I even tried to "inspire" GPT to write other kinds of heaps, like binomial heap and pairing heap, but GPT is so stubborn (though very polite) that it keeps giving me almost the same code over and over again. At last I gave up on GPT and turned to StackOverflow, and just a few words enlightened me (FYI, it's two heaps, one for insertion, one for deletion, and when the top nodes in both heaps have the same key, cancel them out). My conclusion is: AI is useless in some domains when it doesn't have enough learning material in those domains.
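The two-heap trick that anecdote arrives at (one heap for insertions, one for pending deletions, canceling out matching tops) can be sketched in Python. This is a rough illustration of the technique, not the Elixir code in question, and it assumes keys are only deleted after having been inserted:

```python
import heapq

class LazyDeletePQ:
    """Priority queue with amortized O(log n) delete via two heaps:
    one holds live insertions, the other records pending deletions.
    When the tops of both heaps match, they cancel out."""

    def __init__(self):
        self._live = []  # inserted keys
        self._dead = []  # keys marked for deletion, not yet removed

    def push(self, key):
        heapq.heappush(self._live, key)

    def delete(self, key):
        # Lazy deletion: just record the key; actual removal happens
        # when it surfaces at the top of the live heap.
        heapq.heappush(self._dead, key)
        self._cancel()

    def peek(self):
        self._cancel()
        return self._live[0] if self._live else None

    def pop(self):
        self._cancel()
        return heapq.heappop(self._live)

    def _cancel(self):
        # Cancel matching tops of the two heaps.
        while self._dead and self._live and self._dead[0] == self._live[0]:
            heapq.heappop(self._dead)
            heapq.heappop(self._live)
```

A decrease-key then becomes `delete(old_key)` followed by `push(new_key)`, both logarithmic.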
[u/erlangsolutions](https://www.reddit.com/r/elixir/comments/13xeh8w/how_chatgpt_improved_my_elixir_code_some_hacks/)
> Using ChatGPT when programming with Elixir can bring several advantages. One of the most significant advantages is that it can provide quick and accurate responses to various programming queries, including syntax and documentation. This can help programmers save time and improve their productivity. Additionally, ChatGPT can offer personalised and adaptive learning experiences based on individual programmers skill levels and preferences. This can help programmers learn Elixir more efficiently and effectively.
[D4no0](https://elixirforum.com/t/get-ai-code-generation-tools-to-create-correct-elixir-code-or-else/53931/2)
> The question is: how much boilerplate code do you really write? Elixir compared to other languages has little to none boilerplate, and for moments such as phoenix things, there are configurable generators. I wouldnt want an AI incapable of problem solving to generate complex code for me, because as tempting as it seems, the productivity decreases a lot if we talk about refactoring generated code compared to creating your own new code.

# Erlang
Erlang is the #38 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Erlang is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Erlang is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Erlang is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Erlang is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Erlang is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Erlang is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Erlang is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Erlang is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Erlang is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Erlang has 9,621 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Erlang projects have had 70,890 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Erlang projects have had 49,786 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Erlang projects have had 249,209 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Erlang projects have had 127,120 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/Ranugad](https://www.reddit.com/r/erlang/comments/11kl57z/comment/jbbw94t)
> It seems like ChatGPT doesn't know that much Erlang.
[Rich_Morin](https://elixirforum.com/t/asking-chatgpt-to-translate-erlang-to-elixir/53548)
> I recently asked ChatGPT to translate some Erlang code into Elixir. Heres an edited transcript, for your amusement and edification…
[u/boy-griv](https://www.reddit.com/r/AskProgramming/comments/10tave8/comment/j78bvj5)
> I dont think anything automated is going to work well. ChatGPT might be interesting but youll almost certainly have to fix it up quite a bit. https://learnxinyminutes.com/docs/erlang/ gives a quick rundown on erlang syntax/semantics and https://learnyousomeerlang.com/ is a good book on it

# GDScript
GDScript is the #33 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ GDScript is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ GDScript is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ GDScript is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ GDScript is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ GDScript is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ GDScript is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ GDScript is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ GDScript is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ GDScript is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
GDScript has 906 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
GDScript projects have had 561 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
GDScript projects have had 1,615 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
GDScript projects have had 3,692 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
GDScript projects have had 9,953 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/Merosian](https://www.reddit.com/r/godot/comments/17nv29g/comment/k7w2nrx/?utm_source=share&utm_medium=web2x&context=3)
> Irrational AI hatred aside, none afaik, godot 4 is too new. When trying to figure out some kinks in my code it kept giving me garbage mixed with outdated godot 3 code. Don't bother, it's faster to just do it yourself for now. It's kind of annoying because in my experience for beginner devs, AI can be a huge help in explaining why your code no worky and how to improve it. It allowed me to go much further in my C++ projects that I thought and saved a ton of time spent on research or debugging.
[u/[deleted]](https://www.reddit.com/r/godot/comments/zf6tve/comment/izaiw13/?utm_source=share&utm_medium=web2x&context=3)
> I was playing with this yesterday and had some difficulty getting it to produce GDScript instead of Python. It insisted the Python code it generated was GDScript haha. Otherwise it made exactly what I wanted, just in the wrong language.
[u/kmouratidis](https://www.reddit.com/r/godot/comments/16j7u9k/comment/k0odex1/?utm_source=share&utm_medium=web2x&context=3)
> You can: Fine-tune ChatGPT. If you're willing to pay, I might help setting it up at some point in the near future. Use a different model like Llama-2 (open-source) which has more recent data (which you can also fine-tune), or from companies like Anthropic/Claude etc. Look (and contribute?) to godot-dodo and Godot Copilot. Export the Godot docs to a PDF and use some plugin, I guess? Never tried it. Copy-paste the GDScript reference page, which will likely improve it's zero-shot predictions.

import csv

data = {}
with open('languages.csv', 'r') as file:
    reader = csv.DictReader(file)
    # Only the first row is read; the script generates one page per run.
    for row in reader:
        data = row
        break

language = data['language']
stack_overflow_ranking = data['so_2023_language_rank']
introduction = f'''# {language}
Recently, many folks have been claiming that their LLM is the best at coding. Their claims are typically based off self-reported evaluations on the [HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=most%20common%20benchmarks-,1.%20HumanEval,-Creator%3A%20OpenAI). But when you look into that benchmark, you realize that *it only consists of 164 Python programming problems.*
This led me down a rabbit hole of trying to figure out how helpful LLMs actually are with different programming, scripting, and markup languages. I am estimating this for each language by reviewing LLM code benchmark results, public LLM dataset compositions, available GitHub and Stack Overflow data, and anecdotes from developers on Reddit. Below you will find what I have figured out about {language} so far.
**Do you have any feedback or perhaps some anecdotes about using LLMs with {language} to share?**
---
'''
stack_overflow = f'''{language} is the #{stack_overflow_ranking} most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).\n\n'''
benchmarks = "## Benchmarks\n\n"
if data["multiple"] == "N/A":
    multiple = f'''❌ {language} is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)\n\n'''
else:
    multiple = f'''✅ {language} is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)\n\n'''
if data["babel"] == "N/A":
    babel = f'''❌ {language} is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)\n\n'''
else:
    babel = f'''✅ {language} is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)\n\n'''
if data["mbxp"] == "N/A":
    mbxp = f'''❌ {language} is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)\n\n'''
else:
    mbxp = f'''✅ {language} is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)\n\n'''
if data["humaneval_x"] == "N/A":
    humaneval_x = f'''❌ {language} is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)\n\n'''
else:
    humaneval_x = f'''✅ {language} is one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)\n\n'''
datasets = "## Datasets\n\n"
if data["stack_gb"] == "0":
    stack = f'''❌ {language} is not included in [The Stack dataset](https://arxiv.org/abs/2211.15533)\n\n'''
else:
    stack = f'''✅ {language} makes up {data["stack_gb"]} GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)\n\n'''
if data["codeparrot_gb"] == "0":
    codeparrot = f'''❌ {language} is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)\n\n'''
else:
    codeparrot = f'''✅ {language} makes up {data["codeparrot_gb"]} GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)\n\n'''
if data["alphacode_gb"] == "0":
    alphacode = f'''❌ {language} is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)\n\n'''
else:
    alphacode = f'''✅ {language} makes up {data["alphacode_gb"]} GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)\n\n'''
if data["codegen_gb"] == "0":
    codegen = f'''❌ {language} is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)\n\n'''
else:
    codegen = f'''✅ {language} makes up {data["codegen_gb"]} GB of the [CodeGen dataset](https://arxiv.org/abs/2203.13474)\n\n'''
if data["polycoder_gb"] == "0":
    polycoder = f'''❌ {language} is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)\n\n'''
else:
    polycoder = f'''✅ {language} makes up {data["polycoder_gb"]} GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)\n\n'''
presence = f'''## Stack Overflow & GitHub presence
{language} has {data["so_tags"]} [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
{language} projects have had {data["github_prs"]} [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
{language} projects have had {data["github_issues"]} [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
{language} projects have had {data["github_pushes"]} [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
{language} projects have had {data["github_stars"]} [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
'''
anecdotes = f'''## Anecdotes from developers
[{data["anecdote_1_author"]}]({data["anecdote_1_url"]})
> {data["anecdote_1_content"]}
[{data["anecdote_2_author"]}]({data["anecdote_2_url"]})
> {data["anecdote_2_content"]}
[{data["anecdote_3_author"]}]({data["anecdote_3_url"]})
> {data["anecdote_3_content"]}
---
'''
conclusion = f'''Original source: https://github.com/continuedev/continue/tree/main/docs/docs/languages/{language.lower()}.md
Data for all languages I've looked into so far: https://github.com/continuedev/continue/tree/main/docs/docs/languages/languages.csv
'''
content = introduction + stack_overflow + benchmarks + multiple + babel + mbxp + humaneval_x + datasets + stack + codeparrot + alphacode + codegen + polycoder + presence + anecdotes + conclusion
with open(f'{language.lower()}.md', 'w') as f:
    f.write(content)
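The script above renders only the first CSV row per run. A hypothetical variant that renders a page for every language in one pass might look like this; `demo_csv` and `render_page` are stand-ins for the real `languages.csv` and the full f-string template assembly shown above:

```python
import csv
import io

# Stand-in CSV content; the real file has many more columns.
demo_csv = "language,so_2023_language_rank\nGo,14\nDart,19\n"

def render_page(row):
    # Abbreviated stand-in for the template logic in the script above.
    return (f"# {row['language']}\n\n"
            f"{row['language']} is the #{row['so_2023_language_rank']} "
            "most popular language according to the 2023 Stack Overflow "
            "Developer Survey.\n")

# Render one page per CSV row, keyed by output filename.
pages = {}
for row in csv.DictReader(io.StringIO(demo_csv)):
    pages[f"{row['language'].lower()}.md"] = render_page(row)
```

In the real script each value of `pages` would be written to disk with the same `with open(...)` pattern used for the single-row case.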

# Go
Go is the #14 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Go is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ Go is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ Go is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
✅ Go is one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Go makes up 118.37 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Go makes up 19.28 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ Go makes up 19.8 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
✅ Go makes up 21.4 GB of the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ Go makes up 15 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Go has 71,541 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Go projects have had 2,642,302 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Go projects have had 1,815,979 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Go projects have had 4,859,219 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Go projects have had 7,318,078 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/vEncrypted](https://www.reddit.com/r/golang/comments/16cs5md/comment/jzl928k/?utm_source=share&utm_medium=web2x&context=3)
> Personally for me this is the completely wrong approach. Having the ai write it for you and then understand what it wrote is less than optimal. You should use chatgpt to ask questions, not write code if you dont understand it. Use it as a mentor who cant be busy to answer your questions. Not as someone who will complete your homework and then maybe youll try and understand it afterwards. If a student actually wants to learn a subject, do they get someone to complete their homework? You get what I mean? If your goal is to just complete a project in anyway. Then maybe might work but most likely wont. You should understand and come up with the logic behind everything you write before letting ai write it for you. Copilot is good for predictable sequences, but most things logic wise it fails as it does not know implementation.
[u/DarkCeptor44](https://www.reddit.com/r/golang/comments/17okcs8/comment/k7zl74p/?utm_source=share&utm_medium=web2x&context=3)
> ChatGPT (mainly the UI) set a bad example, AI has been way more helpful to me for learning Go than going on Google or reading official docs, but not ChatGPT and rather Forefront, which can use GPT 3.5/4 or their own models but regardless they have a Internet Search function that uses the model to simply summarize dozens of actually real pages it found in a way that is easier for me to understand compared to the original, specially since I can keep chain-asking "what is this/what is that", and all from me explaining step-by-step with "janky" English and the full code. It also lists the pages it used so I can just click them and check it myself, (spoiler alert) it doesn't make as many mistakes as people think, even without search it does a great job understanding code, it won't usually solve more than basic problems and just keeps giving you different snippets to try but most of the time I end up fixing the issue because of the answers, even if the code doesn't work, I don't know how else to explain it. Of course my first language isn't English but I also learn almost entirely by example and docs don't usually have snippets for every little thing the code can do, it also sounds a bit advanced to me because it's just a lot of text with (programming/Go) terms that I usually don't use.
[u/Prestigiouspite](https://www.reddit.com/r/golang/comments/153pahy/comment/jsmdut2/?utm_source=share&utm_medium=web2x&context=3)
> When I ask ChatGPT about it, it suggests model.go, view.go, controller.go etc. but says itself that the MVC concept does not exist in Go. So I'm interested how developer with some more experience than I in desktop apps would struct it.

# Groovy
Groovy is the #26 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Groovy is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Groovy is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Groovy is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Groovy is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Groovy is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Groovy is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Groovy is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Groovy is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Groovy is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Groovy has 30,014 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Groovy projects have had 132,381 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Groovy projects have had 108,265 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Groovy projects have had 431,291 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Groovy projects have had 140,122 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[Figaf](https://figaf.com/chatgpt-groovy-code-help-for-sap-cloud-integration/)
> And that it was possible to use the code created by the tool to generate some code that could be used to start your programming. This could save quite a bit of time for developers to use this as a starting point, and you dont need to have a large experience to start coding in UDFs in Groovy. It is also interesting that it has much knowledge about what is going on in an SAP universe, I would have thought it was more difficult to get data about it.
[u/West_Performance_129](https://www.reddit.com/r/groovy/comments/16kuh6s/comment/k1i0lqn/)
> Groovy is a great language with a ton of utility, and can scale like crazy! Write code as dynamic as you want, and choose to refactor into a more type-safe manner later. It's totally worth learning and having it in your toolkit. I program in it every day for many projects. All Java (99.9%) is also valid Groovy, so it's almost impossible not to understand and work with any Java code base you may come across once you get familiar with Groovy. ChatGPT and Github Co-pilot also write excellent Groovy code, which can aid you in learning, and just programming with it in general. It's still actively maintained, too! It's not going away an time soon.
[Jamon Holmgren](https://shift.infinite.red/getting-the-most-from-github-copilot-8f7b32014748)
> When I was building react-native-colo-loco, I had to write a Gradle script, which is written in Groovy. I know a little Groovy, but not much. So I focused on writing precise, accurate comments, and let Copilot suggest lines of code. I could then lean on my development experience to pick up on patterns and syntax, and go from there.


@ -1,48 +0,0 @@
# Haskell
Haskell is the #32 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Haskell is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ Haskell is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Haskell is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Haskell is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Haskell makes up 6.95 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Haskell makes up 1.85 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Haskell is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Haskell is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Haskell is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Haskell has 50,979 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Haskell projects have had 106,539 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Haskell projects have had 146,857 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Haskell projects have had 646,012 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Haskell projects have had 306,235 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/lgastako](https://www.reddit.com/r/haskell/comments/zede58/comment/iz68s9c/?utm_source=share&utm_medium=web2x&context=3)
> I've been generating a ton of Haskell code with it and it's been fantastic. I have a driver for content addressable storage in my side project, it's pretty simple, but it still took me a few hours each to implement local filesystem and MinIO drivers with tests and ChatGPT did the bulk of the work for Redis and LevelDB implementations in minutes. I've also found it much easier to work with on Haskell code than on python or JS. Obviously some of this is the usual reasons why I would find Haskell code easier to deal with than dynamic languages but I think that the effect is amplified with ChatGPT because the "if it compiles it works" affect gives me much more confidence that what it generated isn't missing anything important than with the other languages, so I can move much faster.
[u/qqwy](https://www.reddit.com/r/haskell/comments/16o5u8e/comment/k1jc68v/?utm_source=share&utm_medium=web2x&context=3)
> Personally, I've been using Copilot mostly in Ruby (work...) and Haskell, and it is much better at predicting Haskell code. I think it's because Haskell has so much context (type signatures, purity, only imported modules are in scope) which greatly restrict what you can do in a particular function and thus Copilot's suggestions seem to be much more often in line with what I wanted to write.
[Chris Smith](https://cdsmithus.medium.com/pair-programming-with-chatgpt-haskell-1c4490b71da6)
> Here, I present the (lightly edited) story of using ChatGPT conversationally to solve a non-trivial problem in Haskell. It definitely gets some things wrong, and its still unclear whether co-developing this with ChatGPT made anything easier than it would have been otherwise. But in any case, it was definitely a different and less lonely experience than just programming on my own.


@ -1,48 +0,0 @@
# HTML
HTML is the #2 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ HTML is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ HTML is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ HTML is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ HTML is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ HTML makes up 746.33 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ HTML makes up 118.12 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ HTML is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ HTML is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ HTML is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
HTML has 1,183,299 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
HTML projects have had 1,140,227 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
HTML projects have had 786,699 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
HTML projects have had 7,284,841 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
HTML projects have had 2,055,453 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/cryothic](https://www.reddit.com/r/css/comments/16owij3/comment/k1tjfqg/)
> i actually used chatgpt to some extent but it doesn't help more than giving directions. i could come up with a fairly okay layout with objects and movement with chatgpt but it doesn't do much more than that
[u/russlo](https://www.reddit.com/r/HTML/comments/11rb46v/comment/jc7yd4f/?utm_source=share&utm_medium=web2x&context=3)
> ChatGPT is up to date as of 2021. That means that any information you get from it is already 2 years out of date. For fast moving languages like Golang, JavaScript, TypeScript, Rust, etc., that's too old. I've been able to make use of it because I have questions about setting up servers and how to refactor old Perl code, but other than that it's just not ready for primetime, yet, IMHO.
[u/steelfrog](https://www.reddit.com/r/HTML/comments/17knwvb/comment/k7943pw/?utm_source=share&utm_medium=web2x&context=3)
> One thing about ChatGPT is that it names its IDs and classes very specifically and rarely uses element-level styles. In my experience, it will give an element a class even if it's the only one on the page. I'm not sure if this practice differs based on the version.


@ -1,48 +0,0 @@
# Java
Java is the #8 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Java is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ Java is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ Java is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
✅ Java is one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Java makes up 271.43 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Java makes up 107.7 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ Java makes up 113.8 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
✅ Java makes up 120.3 GB of the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ Java makes up 41 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Java has 1,911,018 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Java projects have had 3,939,936 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Java projects have had 3,752,951 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Java projects have had 14,008,719 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Java projects have had 9,232,281 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/ByerN](https://www.reddit.com/r/java/comments/163eltc/comment/jy2asuq/?utm_source=share&utm_medium=web2x&context=3)
> Anyone who tried to use ChatGPT to solve some real-world programming issues knows, that even if you are able to replace 1-2 juniors with it, you will lose 1 senior to filter out the nonsense it can produce with full confidence. Not worth it. What's worse - I've seen many beginners treating AI as some form of oracle and believing everything it spits out even if it's all false. But AI is a powerful tool and it's worth checking it out and tracking its progress. Who knows what it will look like in a few years?
[u/benjtay](https://www.reddit.com/r/java/comments/16lu4wb/comment/k14rnx3/?utm_source=share&utm_medium=web2x&context=3)
> I have to wonder if AI translation is determinate. I use Github Copilot fairly often, and it returns schizophrenic suggestions apparently at random. It also seems stuck in pre Java-8 for syntax (I've never seen it use switch expressions, and it rarely uses streams).
[u/BarryFruitman](https://www.reddit.com/r/java/comments/176t5vb/comment/k4rwd2t/?utm_source=share&utm_medium=web2x&context=3)
> I've been using GitHub Copilot with Android Studio for a couple of months. It's actually amazing. It doesn't produce a ton of suggestions, but the ones it does produce are right a lot of the time. Even the wrong ones are often pretty close and only need minor editing. It won't write full classes but it can write short methods or blocks of code. Highly recommend.


@ -1,48 +0,0 @@
# JavaScript
JavaScript is the #1 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ JavaScript is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ JavaScript is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ JavaScript is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
✅ JavaScript is one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ JavaScript makes up 486.2 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ JavaScript makes up 87.82 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ JavaScript makes up 88 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
✅ JavaScript makes up 24.7 GB of the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ JavaScript makes up 22 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
JavaScript has 2,518,260 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
JavaScript projects have had 6,390,411 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
JavaScript projects have had 6,753,636 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
JavaScript projects have had 22,397,798 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
JavaScript projects have had 23,751,668 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/Ok-Hospital-5076](https://www.reddit.com/r/javascript/comments/17o0p9o/comment/k7xhnws/?utm_source=share&utm_medium=web2x&context=3)
> ChatGPT for faster and consise search results and thats all .Co Pilot isn't my cup of tea.
[u/andeee23](https://www.reddit.com/r/javascript/comments/17o0p9o/comment/k7ww57w/?utm_source=share&utm_medium=web2x&context=3)
> i use chat gpt occasionally instead of google, its ok for some small specific functions but it just saves me 10 minutes here and there. i can definitely imagine my life without it since i often forget it exists
[u/alphabet_american](https://www.reddit.com/r/javascript/comments/17o0p9o/comment/k7wvdjl/?utm_source=share&utm_medium=web2x&context=3)
> I use copilot for autocompletion and chatgpt as sort of a "documentation oracle". gpt4 gives "ok" code, but it where it really shines is asking it to explain something or write a simple implementation.


@ -1,48 +0,0 @@
# Julia
Julia is the #37 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Julia is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ Julia is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Julia is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Julia is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Julia makes up 3.09 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Julia makes up 0.29 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Julia is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Julia is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Julia is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Julia has 12,402 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Julia projects have had 39,305 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Julia projects have had 51,276 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Julia projects have had 166,898 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Julia projects have had 52,326 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/LoganKilpatrick1](https://www.reddit.com/r/Julia/comments/zzvkso/comment/j2i6knx/)
> I usually start my own articles with ChatGPT but the truth is that right now, if you want to say something interesting in the Julia space, you mostly need to write it yourself since the volume of content about Julia out there isnt enough for the outputs of ChatGPT to be very useful since our ecosystem is so small.
[u/Kichae](https://www.reddit.com/r/Julia/comments/112wlle/comment/j8mpgx5/)
> It wasn't trained on sufficient Julia code. As with any machine learning model, ChatGPT is only able to regurgitate what's been fed into it. Also, this behaviour happens with basically every other topic, too. LLMs work by trying to predict what the next word in a sentence would be based on the previous string of words. If a sentence is incomplete, it's going to add a next word. That word is going to be whichever has the highest confidence score, regardless of low that score may actually be. This results in it just making shit up, but often shit that sounds plausible. We've seen CGPT invent academic articles, books, and even entire people because it makes sense to in the sentence it's generating.
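
The decoding behavior described in the quote above (always emitting the highest-scoring next token, no matter how low that top score is) can be sketched as a minimal greedy decoder. This is an illustrative toy, not how any particular model is implemented; the vocabulary and probabilities are invented:

```python
# Minimal sketch of greedy next-token selection: the candidate with the
# highest score is always chosen, even when that score is low. The
# tokens and probabilities below are made up for illustration.

def greedy_next_token(scores: dict[str, float]) -> str:
    """Return the token with the highest score, regardless of magnitude."""
    return max(scores, key=scores.get)

# A confident prediction and a very unconfident one are treated identically:
confident = {"Julia": 0.92, "Python": 0.05, "Rust": 0.03}
unsure = {"Julia": 0.21, "Python": 0.20, "Rust": 0.19, "Go": 0.40}

print(greedy_next_token(confident))  # picks "Julia" at 92% confidence
print(greedy_next_token(unsure))     # picks "Go" at only 40% confidence
```

This is why thin training data (as with Julia) produces plausible-sounding but unreliable output: the decoder has no mechanism for declining to answer when every candidate is a long shot.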
[u/Paravalis](https://www.reddit.com/r/Julia/comments/112wlle/comment/j8qzc0j/)
> I suspect the current language model behind ChatGPT was fed with a lot of code examples from Stack Exchange, but the Julia community mainly uses Discourse instead, which probably wasn't in the training set: https://discourse.julialang.org/


@ -1,48 +0,0 @@
# Kotlin
Kotlin is the #16 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Kotlin is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Kotlin is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ Kotlin is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Kotlin is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Kotlin is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Kotlin is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Kotlin is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Kotlin is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Kotlin is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Kotlin has 92,664 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Kotlin projects have had 346,824 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Kotlin projects have had 174,810 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Kotlin projects have had 816,744 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Kotlin projects have had 545,403 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/Feztopia](https://www.reddit.com/r/Kotlin/comments/zo6jpo/comment/j0lv16b/?utm_source=share&utm_medium=web2x&context=3)
> chatgpt doesn't know that Kotlin can use java libraries, which makes sense since it knows nothing. Chatgpt doesn't know that you target older Android versions with new languages. The reason why there are more java programs for old reasons is just historical and doesn't benefit java in any way. But chatgpt will never understand this since it can't understand anything. Here chatgpt is correct. It's amazing how it can produce a correct answer without having any idea what it's doing.
[u/duongdominhchau](https://www.reddit.com/r/Kotlin/comments/10tzne0/comment/j79pkls/?utm_source=share&utm_medium=web2x&context=3)
> If you want solid foundation, don't. ChatGPT is known for inventing things and confidently state it as if that's true, if you don't have knowledge to judge its output, you can't fully trust the answer.
[u/LoveSpiritual](https://www.reddit.com/r/Kotlin/comments/14bpuym/comment/jokay83/?utm_source=share&utm_medium=web2x&context=3)
> Not mentioned yet, but I really believe ChatGPT and Copilot (and whatever is coming down the pike) really reduces the “learning a new language” hump for EVERY language, and definitely for Kotlin. Asking it to do idiomatic Kotlin usually produces quite good results, and asking it how to do a Java thing best in Kotlin definitely does well also. So every new Java developer will be adept at Kotlin even faster than before.


@ -1,39 +0,0 @@
language,multiple,babel,mbxp,humaneval_x,so_2023_language_percent,so_2023_language_rank,so_tags,github_prs,github_pushes,github_issues,github_stars,stack_gb,codeparrot_gb,alphacode_gb,codegen_gb,polycoder_gb,subreddit_members,subreddit_url,anecdote_1_content,anecdote_1_author,anecdote_1_url,anecdote_2_content,anecdote_2_author,anecdote_2_url,anecdote_3_content,anecdote_3_author,anecdote_3_url
Erlang,N/A,N/A,N/A,N/A,0.99%,38,"9,621","70,890","249,209","49,786","127,120",Unspecified,0,0,0,0,9.5k,https://www.reddit.com/r/erlang,It seems like ChatGPT doesn't know that much Erlang.,u/Ranugad,https://www.reddit.com/r/erlang/comments/11kl57z/comment/jbbw94t,"I recently asked ChatGPT to translate some Erlang code into Elixir. Heres an edited transcript, for your amusement and edification…",Rich_Morin,https://elixirforum.com/t/asking-chatgpt-to-translate-erlang-to-elixir/53548,I dont think anything automated is going to work well. ChatGPT might be interesting but youll almost certainly have to fix it up quite a bit. https://learnxinyminutes.com/docs/erlang/ gives a quick rundown on erlang syntax/semantics and https://learnyousomeerlang.com/ is a good book on it,u/boy-griv,https://www.reddit.com/r/AskProgramming/comments/10tave8/comment/j78bvj5
Julia,✅,✅,N/A,N/A,1.15%,37,"12,402","39,305","166,898","51,276","52,326",3.09,0.29,0,0,0,23.9k,https://reddit.com/r/julia,"I usually start my own articles with ChatGPT but the truth is that right now, if you want to say something interesting in the Julia space, you mostly need to write it yourself since the volume of content about Julia out there isnt enough for the outputs of ChatGPT to be very useful since our ecosystem is so small.",u/LoganKilpatrick1,https://www.reddit.com/r/Julia/comments/zzvkso/comment/j2i6knx/,"It wasn't trained on sufficient Julia code. As with any machine learning model, ChatGPT is only able to regurgitate what's been fed into it. Also, this behaviour happens with basically every other topic, too. LLMs work by trying to predict what the next word in a sentence would be based on the previous string of words. If a sentence is incomplete, it's going to add a next word. That word is going to be whichever has the highest confidence score, regardless of low that score may actually be. This results in it just making shit up, but often shit that sounds plausible. We've seen CGPT invent academic articles, books, and even entire people because it makes sense to in the sentence it's generating.`",u/Kichae,https://www.reddit.com/r/Julia/comments/112wlle/comment/j8mpgx5/,"I suspect the current language model behind ChatGPT was fed with a lot of code examples from Stack Exchange, but the Julia community mainly uses Discourse instead, which probably wasn't in the training set: https://discourse.julialang.org/",u/Paravalis,https://www.reddit.com/r/Julia/comments/112wlle/comment/j8qzc0j/
Clojure,N/A,N/A,N/A,N/A,1.26%,36,"17,630","112,757","518,359","84,128","272,970",Unspecified,0,0,0,0,31.5k,https://www.reddit.com/r/Clojure,"I've been using Copilot since December 2022. It sucks for Clojure but can be great for other languages like Python, JavaScript, SQL, etc. if you know how to prompt it. As other have mentioned, Copilot excels at reducing boilerplate and picking up on patterns. For example, lets say there is a table of data in a markdown document and you want to convert it to a vector of maps. You can copy/paste the markdown table into your buffer as a comment and just start writing the data structure you want it to be, Copilot will figure it out and complete it. Its also useful for generating random utility functions. Recently in JavaScript, I typed `function lerp` (linear interpolation) and it pretty quickly filled it in. I had an array of hex color values that I wanted to be RGB and I wanted to double the number of values by interpolating between them. All I had to do was type that in a comment and wait a second before it gave me a working rough draft of the function. Copilot can actually do a lot of these things for Clojure but when I was trying to use it I found myself consistently having to fix issues with delimiters, typically round braces. Eventually, I just gave up on it. Maybe I'll give it another shot when Copilot-X releases. ChatGPT is much more useful for Clojure than Copilot. It does hallucinate and get some things wrong but overall its awesome for generating documentation, explaining code, translating diffs into PR notes, and exploring ideas. I've found it very useful for random Java questions and then translating the answers into mostly working Clojure code. These things are handy tools and have quirks but they're going to get better. It's a great time to be a cosmopolitan (polyglot) programmer.",u/noprompt,https://www.reddit.com/r/Clojure/comments/148nhuj/comment/jo2z2n8,"No Clojure. No Julia. No Haskell. No Racket. No Scheme. 
No Common Lisp. No OCaml. And, as much as I despise Microsoft, No C#. No F#. No Swift. No Objective-C. No Perl. No Datalog. A glaringly lacking choice of languages.",waffletower,https://news.ycombinator.com/item?id=35803856,"FizzBuzz was once a common programming exercise used for screening software developers (maybe it still is?) I told chatGPT to ""Write an efficient fizz buzz function in Clojure"".",@EricTheTurner,https://x.com/EricTheTurner/status/1600344406166380544?s=20
Solidity,N/A,N/A,N/A,N/A,1.33%,35,"6,669",0,0,0,350,0,0,0,0,0,17.0k,https://reddit.com/r/solidity,"ChatGPT is awful at smart contract, the data is years out of date, and it tend to override and make functions that are unnecessary. Even worse it overrides safe good functions for unsafe inefficient functions. Speaking of inefficiency it will seriously de-optimize optimized code, even when asked to gas optimize it.",u/Adrewmc,https://www.reddit.com/r/solidity/comments/142amjb/comment/jn48x8v/,"Despite the mixed results, ChatGPT, aka GPT-3.5, is a step forward in the direction of writing code with an AI assistant. I actually enjoyed doing these little experiments. However, compared to other experiments I did with JavaScript and other languages, a clear takeaway from my efforts is that when it comes to the Web3 space, GPT doesnt yet have enough accuracy. In fairness, there is far less available Solidity and Web3-related JavaScript code in the wild than there is general-purpose JavaScript code. Plus, the web3 industry is constantly changing, which makes the problems of ChatGPT relying on an old dataset much worse. . On the positive side, generating an ABI from Solidity is something it did well, which shows it can learn from the available snippets the general rules to create something new.",Lorenzo Sicilia,https://outlierventures.io/article/can-chatgpt-really-be-trusted-to-write-a-smart-contract-or-to-refactor-your-existing-solidity-code/,Can someone please make an open coder model trained on Solidity,u/thatdudeiknew,https://www.reddit.com/r/LocalLLaMA/comments/14qednx/comment/jqmq2t5/?utm_source=share&utm_medium=web2x&context=3
Lisp,N/A,N/A,N/A,N/A,1.53%,34,"6,945","8,431","73,903","12,870","47,157",Unspecified,0,0,0,0,37.7k,https://www.reddit.com/r/lisp,"Chat gpt is known to lie and be confident in its incorrectness. Also, try telling it to convert a program from lisp to python that uses advanced features like the condition system.",u/KaranasToll,https://www.reddit.com/r/lisp/comments/138aovs/comment/jixfrkr/,"How do you think the advent of ChatGPT and Copilot would affect the adoption and popularity of Common Lisp, Clojure and Schemes? On one hand, Large Language Models did not have access to these ""niche"" languages for training as much as the more popular alternatives like Python and Typescript so the quality of their output would be worse in comparison. On the other hand, the ""interactive"" aspect of LISP in that you code stuff, test in REPL and code again would not be so unique since the developer can just use the chat system to refine his solution. The other upside that LISPs had over the likes of Rust and C++ is the lack of syntax clutter and cleanness of s-expressions. In this front too, they would hurt from the likes of ChatGPT since the syntactic complexity is handled by the LLM not the developer.",u/friedrichRiemann,https://www.reddit.com/r/lisp/comments/11lwwv1/possible_effects_of_aiassisted_tools_on_lisps/?utm_source=share&utm_medium=web2x&context=3,"I'm an engineer working in the construction field, and I'm currently trying to create a Lisp routine for a project I'm working on. I've been trying to use GPT to generate the code, but I'm having some trouble getting it to work properly. I was wondering if anyone knows of a pre-trained GPT that has been specifically trained on Lisp code. I've been searching online, but I haven't had any luck so far. 
If anyone knows of a pre-trained GPT with Lisp, or has any tips for training my own GPT on Lisp code, I would really appreciate the help.",/u/Fine_Impression_3171,https://www.reddit.com/r/ChatGPT/comments/12o4k1n/looking_for_pretrained_gpt_with_lisp_autocad/
GDScript,N/A,N/A,N/A,N/A,1.71%,33,906,561,"3,692","1,615","9,953",Unspecified,0,0,0,0,147k,https://www.reddit.com/r/godot/,"Irrational AI hatred aside, none afaik, godot 4 is too new. When trying to figure out some kinks in my code it kept giving me garbage mixed with outdated godot 3 code. Don't bother, it's faster to just do it yourself for now. It's kind of annoying because in my experience for beginner devs, AI can be a huge help in explaining why your code no worky and how to improve it. It allowed me to go much further in my C++ projects that I thought and saved a ton of time spent on research or debugging.",u/Merosian,https://www.reddit.com/r/godot/comments/17nv29g/comment/k7w2nrx/?utm_source=share&utm_medium=web2x&context=3,"I was playing with this yesterday and had some difficulty getting it to produce GDScript instead of Python. It insisted the Python code it generated was GDScript haha. Otherwise it made exactly what I wanted, just in the wrong language.",u/[deleted],https://www.reddit.com/r/godot/comments/zf6tve/comment/izaiw13/?utm_source=share&utm_medium=web2x&context=3,"You can: Fine-tune ChatGPT. If you're willing to pay, I might help setting it up at some point in the near future. Use a different model like Llama-2 (open-source) which has more recent data (which you can also fine-tune), or from companies like Anthropic/Claude etc. Look (and contribute?) to godot-dodo and Godot Copilot. Export the Godot docs to a PDF and use some plugin, I guess? Never tried it. Copy-paste the GDScript reference page, which will likely improve it's zero-shot predictions.",u/kmouratidis,https://www.reddit.com/r/godot/comments/16j7u9k/comment/k0odex1/?utm_source=share&utm_medium=web2x&context=3
Haskell,N/A,✅,N/A,N/A,2.09%,32,"50,979","106,539","646,012","146,857","306,235",6.95,1.85,0,0,0,76.1k,https://www.reddit.com/r/haskell/,"I've been generating a ton of Haskell code with it and it's been fantastic. I have a driver for content addressable storage in my side project, it's pretty simple, but it still took me a few hours each to implement local filesystem and MinIO drivers with tests and ChatGPT did the bulk of the work for Redis and LevelDB implementations in minutes. I've also found it much easier to work with on Haskell code than on python or JS. Obviously some of this is the usual reasons why I would find Haskell code easier to deal with than dynamic languages but I think that the effect is amplified with ChatGPT because the ""if it compiles it works"" affect gives me much more confidence that what it generated isn't missing anything important than with the other languages, so I can move much faster.",u/lgastako,https://www.reddit.com/r/haskell/comments/zede58/comment/iz68s9c/?utm_source=share&utm_medium=web2x&context=3,"Personally, I've been using Copilot mostly in Ruby (work...) and Haskell, and it is much better at predicting Haskell code. I think it's because Haskell has so much context (type signatures, purity, only imported modules are in scope) which greatly restrict what you can do in a particular function and thus Copilot's suggestions seem to be much more often in line with what I wanted to write.",u/qqwy,https://www.reddit.com/r/haskell/comments/16o5u8e/comment/k1jc68v/?utm_source=share&utm_medium=web2x&context=3,"Here, I present the (lightly edited) story of using ChatGPT conversationally to solve a non-trivial problem in Haskell. It definitely gets some things wrong, and it's still unclear whether co-developing this with ChatGPT made anything easier than it would have been otherwise. 
But in any case, it was definitely a different and less lonely experience than just programming on my own.",Chris Smith,https://cdsmithus.medium.com/pair-programming-with-chatgpt-haskell-1c4490b71da6
Objective-C,N/A,N/A,N/A,N/A,2.31%,31,"292,409","263,146","1,172,307","397,275","3,003,177",Unspecified,0,0,0,0,7.0k,https://www.reddit.com/r/ObjectiveC/,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
Elixir,N/A,N/A,N/A,N/A,2.32%,30,"9,510","113,018","255,430","65,166","210,145",Unspecified,0,0,0,0,27.1k,https://www.reddit.com/r/elixir/,"One day, I needed to implement a priority queue with amortized O(log n) decrease-key operation in Elixir, but I didn't know how, so I consulted Monica (which interfaces GPT-3, I think), and it gave me the code of a whole Elixir module that is absolutely wrong. It was a binary heap implemented using a single list as if it's a mutable array. Furthermore, it won't even compile! I tried to correct the ""mistake"" GPT made, so I told it more about Elixir, about immutability, about lists in Elixir. I even tried to ""inspire"" GPT to write other kinds of heaps, like binomial heap and pairing heap, but GPT is so stubborn (though very polite) that it keeps giving me almost the same code over and over again. At last I gave up on GPT and turned to StackOverflow, and just a few words enlightened me (FYI, it's two heaps, one for insertion, one for deletion, and when the top nodes in both heaps have the same key, cancel them out). My conclusion is: AI is useless in some domains when it doesn't have enough learning material in those domains.",u/a3th3rus,https://www.reddit.com/r/elixir/comments/16vrhr6/comment/k2xel5z/?utm_source=share&utm_medium=web2x&context=3,"Using ChatGPT when programming with Elixir can bring several advantages. One of the most significant advantages is that it can provide quick and accurate responses to various programming queries, including syntax and documentation. This can help programmers save time and improve their productivity. Additionally, ChatGPT can offer personalised and adaptive learning experiences based on individual programmers' skill levels and preferences. This can help programmers learn Elixir more efficiently and effectively.",u/erlangsolutions,https://www.reddit.com/r/elixir/comments/13xeh8w/how_chatgpt_improved_my_elixir_code_some_hacks/,"The question is: how much boilerplate code do you really write? 
Elixir compared to other languages has little to none boilerplate, and for moments such as phoenix things, there are configurable generators. I wouldn't want an AI incapable of problem solving to generate complex code for me, because as tempting as it seems, the productivity decreases a lot if we talk about refactoring generated code compared to creating your own new code.",D4no0,https://elixirforum.com/t/get-ai-code-generation-tools-to-create-correct-elixir-code-or-else/53931/2
Perl,✅,N/A,✅,N/A,2.46%,29,"67,938","125,129","634,214","117,426","188,697",5.5,4.7,0,0,0,16.4k,https://www.reddit.com/r/perl/,"There are a few problems with this, and I noticed the exact same thing with the GitHub Copilot. It's barfing out examples it was trained on with no idea about what they do, whether they work, and if they are current. Transaction objects no longer have a success method. This was deprecated for a long time ago and finally removed in version 9. The error method returns a single value. Minor problem, but still cruft that shouldn't be there. Call json on the response to get the data structure rather than doing this yourself. Even then, using JSON directly, while fine, skips over the Mojo::JSON::decode_json. It's a bit of a pain in the butt, but work hard to use the same parser everywhere in an application since they tend to have slight differences (say, like how they represent null, true, or false). Somewhere along the line, ChatGPT saw this code or something very similar. It then returns it to you with no intelligence about what it is doing or what it should do. It's very likely that the source ChatGPT saw is not only old, but also unsophisticated. You're likely just cargo-culting off StackOverflow with extra steps. But, this also isn't the way you probably want to write code. You don't want to return the token really, You want to add that to the user-agent so it provides it in every request without additional code from you. I have plenty of examples in Mojo Web Clients. That's another problem with the source material for these sorts of things: it's training itself off public data, but often our examples are mere demonstrations of ideas rather than advice on reliable software engineering (since we aren't going to write a book for every question someone asks).",u/briandfoy,https://www.reddit.com/r/perl/comments/10j0k00/comment/j5ki948,"""Somewhere along the line, ChatGPT saw this code or something very similar. 
It then returns it to you with no intelligence about what it is doing or what it should do."" IMO, this is quite irrelevant, because you must understand that whatever output - be it code, poems or whatever - from an AI-assisted service is not perfect. The main point is: it helps. And that's its main selling point today, because that's how StackOverflow also works: sometimes it's perfect, but most of the times it just helps, maybe because you have addressed the wrong audience, didn't word your question/problem correctly or otherwise. With ChatGPT you get an instant reply, and you can ask it to refine its reply. Instantly. Rinse and repeat. So if it use StackOverflow data (which I assume it does) it's already better in the sense that it's instant and filters out noise, especially personal attacks, or otherwise replies that intimidates the person asking the questions. ""It then returns it to you with no intelligence about what it is doing or what it should do."" Let's be honest, we have all been there and/or we have had colleagues who fits that description. :)",u/nobono,https://www.reddit.com/r/perl/comments/10j0k00/comment/j5l9s1c/,"You mentioned being new to perl and programming. Personally, I think ChatGPT is a great resource for these types of question. I asked it your question and copied the function from csv2fasta.pl",u/its_a_gibibyte,https://www.reddit.com/r/perl/comments/14capfv/comment/jol2a4b
Scala,✅,N/A,✅,N/A,2.77%,28,"111,969","605,988","1,508,526","271,184","540,327",14.87,3.87,4.1,0,1.8,51.3k,https://www.reddit.com/r/scala/,"Today I decided to test it by asking how one would use Scala 3 macros to get the types of the inputs and outputs of a method. It had some decent suggestions to do that for someone that is new to macros, but a lot of its answer was false, suggesting people use something called QuotesContext, not recognizing properly what extension methods are available for the Symbol type, and worst of all, trying to splice Type values into an Expr. If they can manage to get chatgpt to actually tell the truth consistently (like saying ""I don't know how to do that"" rather than just lying) I think it will be a nice resource for discovering how to do stuff you don't currently know how to do. Sadly, it's still got a nasty habit of making stuff up.",u/markehammons,https://www.reddit.com/r/scala/comments/124ocqh/scala_and_chatgpt/,"Well...this is a very old thread but I'm using the latest copilot for scala available today of this post. I mostly use the ZIO framework. I was skeptical at first but I'm finding the suggestions get smart quickly and it is generating a lot of code fragments pretty well. I'm not claiming I can live without it, but as of today, I'm thinking it works pretty well for my scenarios. I could easily see not wanting to code without in the near future. I think using a framework like ZIO makes it easier to generate code fragments because the ZIO framework has a fairly predictable surface area, but that's just a guess.",u/agilesteel,https://www.reddit.com/r/scala/comments/ovoc8n/github_copilot_for_scala_does_it_work/,I wanted to start a new Scala project based on Clean Architecture aka dependency inversion. So I asked for a basic example to demo the principles. There was a lot of pretty code but ultimately it had no idea what this was about. 
The code was bs.,u/k1v1uq,https://www.reddit.com/r/ChatGPTCoding/comments/zpunkt/comment/j25ftsr/?utm_source=share&utm_medium=web2x&context=3
Delphi,N/A,N/A,N/A,N/A,3.23%,27,"51,475",310,552,0,0,0,0,0,0,0,3.8k,reddit.com/r/delphi,PSA: GitHub Copilot works with Delphi,u/EasywayScissors,https://www.reddit.com/r/delphi/comments/wnhk9x/psa_github_copilot_works_with_delphi/?utm_source=share&utm_medium=web2x&context=3,"As you can see, it is possible to use an AI for simple pieces of code to create basic Delphi code quickly. We can now go one step further and implement this in Delphi itself.",Marco Geuze,https://gdksoftware.com/knowledgebase/delphi-and-chatgpt,"I asked a series of Pascal programming questions to an AI chatbot system while testing its abilities, and the following page is a record of its responses.",u/sysrpl,https://www.reddit.com/r/delphi/comments/1006ybh/programming_pascal_using_an_ai_chatbot/?utm_source=share&utm_medium=web2x&context=3
Groovy,N/A,N/A,N/A,N/A,3.40%,26,"30,014","132,381","431,291","108,265","140,122",Unspecified,0,0,0,0,3.0k,https://www.reddit.com/r/groovy/,"And that it was possible to use the code created by the tool to generate some code that could be used to start your programming. This could save quite a bit of time for developers to use this as a starting point, and you dont need to have a large experience to start coding in UDFs in Groovy. It is also interesting that it has much knowledge about what is going on in an SAP universe, I would have thought it was more difficult to get data about it.",Figaf,https://figaf.com/chatgpt-groovy-code-help-for-sap-cloud-integration/,"Groovy is a great language with a ton of utility, and can scale like crazy! Write code as dynamic as you want, and choose to refactor into a more type-safe manner later. It's totally worth learning and having it in your toolkit. I program in it every day for many projects. All Java (99.9%) is also valid Groovy, so it's almost impossible not to understand and work with any Java code base you may come across once you get familiar with Groovy. ChatGPT and Github Co-pilot also write excellent Groovy code, which can aid you in learning, and just programming with it in general. It's still actively maintained, too! It's not going away an time soon.",u/West_Performance_129,https://www.reddit.com/r/groovy/comments/16kuh6s/comment/k1i0lqn/,"When I was building react-native-colo-loco, I had to write a Gradle script, which is written in Groovy. I know a little Groovy, but not much. So I focused on writing precise, accurate comments, and let Copilot suggest lines of code. I could then lean on my development experience to pick up on patterns and syntax, and go from there.",Jamon Holmgren,https://shift.infinite.red/getting-the-most-from-github-copilot-8f7b32014748
VBA,N/A,N/A,N/A,N/A,3.55%,25,"212,313","22,482","77,915","17,439","19,273",2.73,1.91,0,0,0,52.3k,https://www.reddit.com/r/vba/,"It depends on how you use ChatGPT though. I started a VBA project using methods I had used in the past. When that didn't work, I tried the Google approach, and still couldn't do what I wanted. Then, I remembered that ChatGPT does code, and decided to give it a shot. Honestly, what it gave me was riddled with errors, but I went through error by error and forced the AI to come up with corrections. I would copy-past the code into the prompt and ask it to identify potential errors and explain how they could be fixed. I got a really intimate understanding of the code, the reasons for the errors, and the strategies for correcting them. Even then, the code was flawed and ultimately failed. But I was able to use some of what I picked up throughout the process to build my own foundation for the code that would eventually work and used the AI to help fill in the blanks. I got a lot out of the experience. It's very important to ask very specific questions and to make sure that you understand the recommendations that it makes so you don't get lost in later steps. I used Google to supplement some of the information the AI gave me to improve my understanding. I spent a lot of time with this thing, and I think we both came out of it just a little better at what we do.",u/imartnm,https://www.reddit.com/r/vba/comments/108zy8k/comment/j3zcukr/?utm_source=share&utm_medium=web2x&context=3,"I've tried using it for VBA/Power Query code, but it's spotty at the best of times. It sometimes will reference functions that don't exist, or will ignore the carefully worded instructions you give it. At its current state it's most useful as a glorified google /stackoverflow search. It can also be helpful while debugging or just to throw some suggestions your way. 
Writing out the basic structure of my module and asking for recommendations/alternatives to certain implementations is fun and has taught me some new tricks. So it's cool, but not really reliable. Don't let it write your code for you or you might risk spending more time fixing it than you would have just writing it. I'd say it's VBA capabilities are better than its grasp on PowerQuery (M) .",u/Confuciusz,https://www.reddit.com/r/vba/comments/108zy8k/comment/j3wn54u/?utm_source=share&utm_medium=web2x&context=3,"Lol I just made a comment on another similar post where OP said GPT was incredible for Excel 😂 But yeah, GPT is still awful for VBA or long formulas. I tried giving clear instructions for simple tasks that it couldn't get right. It's cool, but long way to go",u/E_Man91,https://www.reddit.com/r/vba/comments/123zuo6/comment/je3ixwy/?utm_source=share&utm_medium=web2x&context=3
MATLAB,N/A,N/A,N/A,N/A,3.81%,24,"94,777","23,655","266,359","33,289","84,982",Unspecified,0,0,0,0,53.2k,reddit.com/r/matlab,"Yep, pretty much all the MATLAB code ChatGPT write for me worked. There was one instance whereby there was a multiplication that went away as it used * instead of .* To multiply two vectors. When I pointed that out, it corrected the code. In this case it was an order of operations issue and it correctly got it sorted by adjusting the parentheses. Pretty impressive so far.",u/worblyhead,https://www.reddit.com/r/matlab/comments/12fwjx5/comment/jficv03/?utm_source=share&utm_medium=web2x&context=3,"Yes, you can use Co-Pilot with Matlab code. However, it won't work with the usual MATLAB IDE, so you have to use one of the supported IDEs (e.g. VS Code or JetBrains).",u/Latter_Trouble_3227,https://www.reddit.com/r/matlab/comments/y07uop/comment/jbgoj6h/?utm_source=share&utm_medium=web2x&context=3,"Why would you think such a simple plot with callback on click would not work? Now I wonder if it made the callback zoom-safe. I was using update callbacks after only 8 months of college experience with Matlab. And yet, I can't make chatGPT to give me the correct answer to a function inverse involving rational polynomials (at least the steps it got right, allowed me to remember how to do function inverses)",u/LevelHelicopter9420,https://www.reddit.com/r/matlab/comments/12fwjx5/comment/jfll3tu/?utm_source=share&utm_medium=web2x&context=3
VB.NET,N/A,N/A,N/A,N/A,4.07%,23,"335,092","15,653","35,848","2,915",0,Unspecified,0,0,0,0,145k,https://www.reddit.com/r/dotnet/,"What I've seen from gpt and copilot is that it's a good junior and sparring partner, but it's no substitute for a senior. It lacks reasoning and analytical capabilities to be a true senior. For example, it can tell you the difference between mediator and nservicebus (dotnet environment), but it cannot explain which one you should use for the project you are working on.",u/KenBonny,https://www.reddit.com/r/dotnet/comments/16j8il5/comment/k0qjb6u/?utm_source=share&utm_medium=web2x&context=3,"I've been using it for a LOT of utility classes, regex expressions, and things like that. It's nowhere near replacing my job yet but it's saved me countless hours on some rather trivial but tedious tasks. Most recent today was a function that converts a string to camel case, worked perfectly right out of the gate. Yea I probably could have found the same function on google in 10 min, but I would have had to comb through ads, and useless posts on stack overflow, before I found one I knew would be performant. It's not laziness, the rest of my job is busy enough, I could have spent an hour or two figuring out the logic from scratch but simply put, this is a far more efficient use of my time.",u/Ch33kyMnk3y,https://www.reddit.com/r/dotnet/comments/10s8eld/comment/j704bu4/?utm_source=share&utm_medium=web2x&context=3,"Yeah, I just use the free version but I'll ask it to do something, it kinda does it, I ask, ""Is this part necessary?"" It then responds with oh you're right and redoes it but in a way that still has questions, like I wanted it to explain why it did something the way it did and it takes that as I'm saying it's not really needed. Then I ask it to explain the new changes and it reverts things to the way it did them before thinking I spotted an error in how it redid the code. 
🤦‍♂️ I still think it's a nice option to springboard learning or get quick explanations of things with examples, but the more I've used it the less I'm convinced it'll be stealing my job anytime soon. What I actually fear more are engineers and/or middle managers who don't know any better trusting everything it suggests who then think this makes engineers less needed or useful.",u/ModernTenshi04,https://www.reddit.com/r/dotnet/comments/15od4zx/comment/jvr5vur/?utm_source=share&utm_medium=web2x&context=3
R,✅,✅,N/A,N/A,4.23%,22,"499,872","51,800","506,309","88,649","91,654",Unspecified,0,0,0,0,36.8k,https://www.reddit.com/r/Rlanguage/,It's even helpful for example datasets. If you want to test or play around it will create a dataframe example. Also if you know one programming language it can help translate. It will even rewrite the code to look better. E.g. write this code in python pandas but make it more readable like r dplyr. Anything regex is nice as I don't have to hope a specific example is on stack overflow. Chat cpt from my experience will often favor going things with for loops instead of taking advantage of dplyr or pandas functions. With everything chat gpt tho check the code as it will confidently give you an answer and even print out a fake output. Often pointing out its error gets chatgpt to fix the code.,u/2truthsandalie,https://www.reddit.com/r/Rlanguage/comments/17q56xq/comment/k8b2phr/?utm_source=share&utm_medium=web2x&context=3,"I have found it hit and miss. I was able to knock up simple Shiny apps in a minute (https://youtu.be/8oJ1HtkpDt0) but have had it write non-sense code for some other things I was trying (especially webscraping). GPT Studio is pretty good (demo here https://youtu.be/QQfDTLExoNU) but has someone else mentioned, take a look at Github Copilot",u/DrLyndonWalker,https://www.reddit.com/r/Rlanguage/comments/17q56xq/comment/k8bi6nq/?utm_source=share&utm_medium=web2x&context=3,"I do it constantly, not only for debugging which it is spectacular at, but for especially tedious things like using ggplot. If you can think it, GPT-4 and the other specialized models can code it. The real key is to put thought into the question you want to answer with the code and then to very deliberately tell the GPT what to do. For example, “I have a data frame with x, y, z variables. Please write R code to perform a, b, c statistical analysis. 
Place the results into a variable called results.” And so on.",u/jrdubbleu,https://www.reddit.com/r/Rlanguage/comments/17q56xq/comment/k89wmhi/?utm_source=share&utm_medium=web2x&context=3
Swift,✅,N/A,✅,N/A,4.65%,21,"331,145","425,921","1,334,455","325,962","2,731,776",Unspecified,0,0,0,0,107k,https://www.reddit.com/r/swift/,"Just a general tip: even though it's a bit out of date, chatgpt will answer these questions much faster and sometimes more accurately than Reddit can. I've pretty much replaced Google with chatgpt and my productivity is up and stress is down. For questions about the newest SwiftUI stuff try Google Bard. The LLMs aren't perfect. There's still a place for Reddit and stack overflow, but I'd check with an LLM first.",u/[deleted],https://www.reddit.com/r/swift/comments/174vuyo/comment/k4eayl9/?utm_source=share&utm_medium=web2x&context=3,"I've tried copilot with SwiftUI and it's good for auto generating some things like specific styles, but not so good for other parts. Sometimes it helps with unit tests, but others it gets stuck in a loop.",u/Zagerer,https://www.reddit.com/r/swift/comments/13929qe/comment/jj0pti9/,"Here is my journey coming from C++: Read through ""A Swift Tour"" and follow along in a Swift Playground. Many times, I feel, ""Huh, this part is so much better than C++."", or ""This is pretty much the same,"" I don't force myself to learn everything though, for example, I skipped protocol entirely. This process took me a few hours. As I dug into SwiftUI, I ran into syntax I didn't understand. Instead of looking up the official document, I just Google or ChatGPT it. When I start doing things in a C++ way that I always hate, I often pause and search if Swift does it better. Oftentimes, Swift does do it better! Still, I carry some baggage from C++ and later notice if I had done it differently, I would have saved myself a lot of trouble (for example, really thinking about whether things can be null or not). Don't be afraid of re-writing; it is part of the process. 
Today, I am still learning; however, I started to catch myself speaking in C++ ""accent"" using Swift, and oftentimes, I can Google/ChatGPT my way out of it.",u/AppleHitMyHead,https://www.reddit.com/r/swift/comments/1724gke/comment/k481769/?utm_source=share&utm_medium=web2x&context=3
Assembly,N/A,N/A,N/A,N/A,5.43%,20,"43,572","14,301","119,341","10,605","50,063",2.36,0.78,0,0,0,16.2k,https://www.reddit.com/r/asm,"Assembly isn't one language, it's a general term for any human-readable representation of a processor's ISA. There are many assembly languages, and there are even different representations of the same ISA. I'm not sure what your book you're using but there are operand order differences between AT&T and Intel x86 (although your example looks like AT&T). You shouldn't be using ChatGPT for any subject you aren't already familiar with though, or you won't be able to recognize when it's hallucinating, or even when it's simply lacking context. Just use a normal, reputable resource like the book you're following. I recommend checking out this wikibook for free online: https://en.wikibooks.org/wiki/X86_Assembly",u/the_Demongod,https://www.reddit.com/r/asm/comments/14q5qi8/comment/jqlmfvn/?utm_source=share&utm_medium=web2x&context=3,"ChatGPT makes a good attempt, but it doesn't actually understand code — ESPECIALLY assembly language, where each instruction exists in a lot of context — and will usually have some kind of bugs in anything it writes.",u/brucehoult,https://www.reddit.com/r/asm/comments/14q5qi8/comment/jqp8rig/,"Idk why all the chatGPT comments are all downvoted, guys it is inevitable that it is going to be a standard part of our lives now. The sooner students start using it the sooner people will realize its limitations. It is a great learning tool and I use it when learning a new subject.",u/dvof,https://www.reddit.com/r/asm/comments/105vl0v/comment/j3hn8xp/?utm_source=share&utm_medium=web2x&context=3
Dart,N/A,✅,N/A,N/A,6.02%,19,"91,732","171,518","230,340","241,706","264,888",Unspecified,0,0,0,0,39.8k,reddit.com/r/dartlang,"The amazing thing about LLMs like ChatGPT is that they develop a kind of ""language sense"" and ""know"" how to stick together the right tokens to achieve a certain goal. They don't ""understand"" Dart - or any other programming language. They just emit tokens that I probably want to see :) Also, we cannot fully comprehend the amount of data that has been processed. Billions and billions of lines of code in dozens if not hundreds of languages.",u/eibaan,https://www.reddit.com/r/dartlang/comments/142fbkc/comment/jnoc1ph/?utm_source=share&utm_medium=web2x&context=3,"Please note that ChatGPT is not sure about anything. It communicates that it knows what it says is true, but it's known to make up facts. Luckily the answer to your question is in the Dart docs. Alternatively StackOverflow has a sensible answer: https://stackoverflow.com/questions/57936263/dart-set-from-vs-set-of",u/Rusty-Swashplate,https://www.reddit.com/r/dartlang/comments/10yiu7d/comment/j7yflw0/?utm_source=share&utm_medium=web2x&context=3,Fantastic recommendations. I actually did have ChatGPT help me override toString for a ton of these classes nested within classes in this giant object I'm trying to print so I can mock. Didn't think to tweak the toString method like that. Not sure I understand your quoted getter though with the slashes. I'll play around with it Monday though.,u/john2046,https://www.reddit.com/r/dartlang/comments/1390c2j/comment/jj0spnc/?utm_source=share&utm_medium=web2x&context=3
Lua,✅,✅,N/A,N/A,6.09%,18,"22,413","139,939","717,566","166,471","366,575",6.58,2.81,2.9,0,0,19.0k,https://www.reddit.com/r/lua/,"First of all, don't use ChatGPT if you want to learn Lua. Refer to the well-written resources such as the ""Programming in Lua"" book instead.",u/appgurueu,https://www.reddit.com/r/lua/comments/11dkwdl/comment/jacqn3z/?utm_source=share&utm_medium=web2x&context=3,Ask chatGPT to convert java / concepts into language to Lua... works surprisingly well,u/gluecat,https://www.reddit.com/r/lua/comments/12wj39f/comment/jhhg8qi/?utm_source=share&utm_medium=web2x&context=3,"Do you not find Copilot frustrating? I cannot stand it, it's the worst thing for me. Whenever I've actually needed help with something, it's either: Gave me absolute garbage code. Missed the point entirely. Maybe I'm just bad at giving it instructions, who knows 😅",u/VitexHD,https://www.reddit.com/r/lua/comments/13tfqs2/comment/jlytud8/
Ruby,✅,N/A,✅,N/A,6.23%,17,"228,663","2,482,982","5,645,881","1,204,510","2,905,832",23.82,10.95,11.6,0,4.1,81.5k,https://www.reddit.com/r/ruby/,"Note that the failure mode for ChatGPT is that it will gaslight and lie to you. If you don't give it enough context, or the method names are ambiguous, there's a potential for it to make up explanations that sound plausible, but are dangerously incorrect. I'd advise talking to your team about the things that confuse you germane to your codebase, and only using ChatGPT for general Ruby content.",u/throwaway-aso2fb,https://www.reddit.com/r/ruby/comments/16y3bxq/comment/k36os5n/?utm_source=share&utm_medium=web2x&context=3,"Not using copilot for the controversy around it stealing source code. Manager gave me a license however to use tabnine at the moment. In...basic scaffolding code it helps me speed up a bit by generating the blocks for example to write specs quickly, providing about 75% of the structure needed to get the spec fleshed out, e.g faster let declarations and do blocks. But for writing actual code I'm fighting it more than its helping me, since it simply doesn't understand what I am trying to write. Documentation is....hit&miss depending on whether it gets the meaning behind the variable names.",u/OlivarTheLagomorph,https://www.reddit.com/r/ruby/comments/zq847a/comment/j0yy2y8/?utm_source=share&utm_medium=web2x&context=3,"I use Github copilot (which uses openai's codex) and occasionally throw some questions to ChatGPT. Currently I use it for Ruby and Kotlin. I committed to Copilot after trying it for five minutes. Total game changer. Time spent doing grunt work, writing repetitive tests etc, has dropped by 90% and I'm left with a lot more time to implement elegant solutions rather than throwing in quick fixes to meet tight deadlines. Sometimes it almost seems like it can read my mind. 
You still need to have the experience and expertise to ensure it hasn't missed the point - it doesn't always have the full context of the problems you're working on - but I would wholeheartedly recommend it to any developer as a way to increase productivity.",u/onionionion,https://www.reddit.com/r/ruby/comments/11usmxs/comment/jcqdd8q/?utm_source=share&utm_medium=web2x&context=3
Kotlin,N/A,N/A,✅,N/A,9.06%,16,"92,664","346,824","816,744","174,810","545,403",Unspecified,0,0,0,0,73.8k,reddit.com/r/kotlin,"chatgpt doesn't know that Kotlin can use java libraries, which makes sense since it knows nothing. Chatgpt doesn't know that you target older Android versions with new languages. The reason why there are more java programs for old reasons is just historical and doesn't benefit java in any way. But chatgpt will never understand this since it can't understand anything. Here chatgpt is correct. It's amazing how it can produce a correct answer without having any idea what it's doing.",u/Feztopia,https://www.reddit.com/r/Kotlin/comments/zo6jpo/comment/j0lv16b/?utm_source=share&utm_medium=web2x&context=3,"If you want solid foundation, don't. ChatGPT is known for inventing things and confidently state it as if that's true, if you don't have knowledge to judge its output, you can't fully trust the answer.",u/duongdominhchau,https://www.reddit.com/r/Kotlin/comments/10tzne0/comment/j79pkls/?utm_source=share&utm_medium=web2x&context=3,"Not mentioned yet, but I really believe ChatGPT and Copilot (and whatever is coming down the pike) really reduces the “learning a new language” hump for EVERY language, and definitely for Kotlin. Asking it to do idiomatic Kotlin usually produces quite good results, and asking it how to do a Java thing best in Kotlin definitely does well also. So every new Java developer will be adept at Kotlin even faster than before.",u/LoveSpiritual,https://www.reddit.com/r/Kotlin/comments/14bpuym/comment/jokay83/?utm_source=share&utm_medium=web2x&context=3
Rust,✅,✅,N/A,N/A,13.05%,15,"39,147","400,875","947,751","239,196","941,468",40.35,2.68,2.8,0,3.5,256k,https://www.reddit.com/r/rust/,"I think programming is heading the same way as translation - a machine can give you a first draft, but experience is needed to verify and fix the resulting code. In the case of translation, many tools exist that will translate text from one language to another, but the results may be slightly or wholly inaccurate: knowledge of both the source and target languages is needed to verify the result. The same is applies to code generation by GPT. The combination of a human and machine will probably give better results, faster. But unsupervised code generation in a general sense is still a way off.",u/remontantcoprology,https://www.reddit.com/r/rust/comments/zgkuq6/comment/izi6p21/?utm_source=share&utm_medium=web2x&context=3,"The issue is that most of the time the code wont compile or have UB so... It could be blazingly fast to give you text but if need 5 or 10 minutes per try to check is doing what i want i prefer to do the code myself and then i am sure is doing what i want. In other langs like Python maybe but in complex langs like C++ or Rust is not as good because of it complexity, i havent tried but in Rust you cant make a buble sort loop without swap(i, j) and GPT could try the usual aproach of array[i] = array[j] which wont work at all",u/JuanAG,https://www.reddit.com/r/rust/comments/zgkuq6/comment/izhfvi3/?utm_source=share&utm_medium=web2x&context=3,I searched the huggingface hub for some LLM to help Rust coding. But most of them just for python. does anyone knows some LLM for just for Rust. Or how to build one. thanks,u/AbleEstablishment155,https://www.reddit.com/r/rust/comments/16iz3fj/is_there_a_specific_llm_for_rust_coding/?utm_source=share&utm_medium=web2x&context=3
Go,✅,✅,✅,✅,13.24%,14,"71,541","2,642,302","4,859,219","1,815,979","7,318,078",118.37,19.28,19.8,21.4,15,224k,https://www.reddit.com/r/golang/,"Personally for me this is the completely wrong approach. Having the ai write it for you and then understand what it wrote is less than optimal. You should use chatgpt to ask questions, not write code if you dont understand it. Use it as a mentor who cant be busy to answer your questions. Not as someone who will complete your homework and then maybe youll try and understand it afterwards. If a student actually wants to learn a subject, do they get someone to complete their homework? You get what I mean? If your goal is to just complete a project in anyway. Then maybe might work but most likely wont. You should understand and come up with the logic behind everything you write before letting ai write it for you. Copilot is good for predictable sequences, but most things logic wise it fails as it does not know implementation.",u/vEncrypted,https://www.reddit.com/r/golang/comments/16cs5md/comment/jzl928k/?utm_source=share&utm_medium=web2x&context=3,"ChatGPT (mainly the UI) set a bad example, AI has been way more helpful to me for learning Go than going on Google or reading official docs, but not ChatGPT and rather Forefront, which can use GPT 3.5/4 or their own models but regardless they have a Internet Search function that uses the model to simply summarize dozens of actually real pages it found in a way that is easier for me to understand compared to the original, specially since I can keep chain-asking ""what is this/what is that"", and all from me explaining step-by-step with ""janky"" English and the full code. 
It also lists the pages it used so I can just click them and check it myself, (spoiler alert) it doesn't make as many mistakes as people think, even without search it does a great job understanding code, it won't usually solve more than basic problems and just keeps giving you different snippets to try but most of the time I end up fixing the issue because of the answers, even if the code doesn't work, I don't know how else to explain it. Of course my first language isn't English but I also learn almost entirely by example and docs don't usually have snippets for every little thing the code can do, it also sounds a bit advanced to me because it's just a lot of text with (programming/Go) terms that I usually don't use.",u/DarkCeptor44,https://www.reddit.com/r/golang/comments/17okcs8/comment/k7zl74p/?utm_source=share&utm_medium=web2x&context=3,"When I ask ChatGPT about it, it suggests model.go, view.go, controller.go etc. but says itself that the MVC concept does not exist in Go. So I'm interested how developer with some more experience than I in desktop apps would struct it.",u/Prestigiouspite,https://www.reddit.com/r/golang/comments/153pahy/comment/jsmdut2/?utm_source=share&utm_medium=web2x&context=3
PowerShell,N/A,N/A,N/A,N/A,13.59%,13,"115,393","72,946","276,134","62,960","195,597",3.37,0.69,0,0,0,227k,https://www.reddit.com/r/PowerShell/,"No, as of now LLM is Just another tool in the toolbox. It makes good coders more effective.",u/JesterOfSpades,https://www.reddit.com/r/PowerShell/comments/13h8ak1/comment/jk3o7v7/?utm_source=share&utm_medium=web2x&context=3,"ChatGPT is not a teaching tool. It isn't capable of understanding, so it cannot properly explain what it's doing. Anything it produces is suspect, because it isn't designed to produce working, clean, modern PowerShell code, it's designed to be a chatbot that puts words next to other words weighted by context clues.",u/lanerdofchristian,https://www.reddit.com/r/PowerShell/comments/171h3id/comment/k3s7ren/,I've had a mixed bag with copilot. Sometimes it has given pure gold that I didn't think about but other times it suggests super lazy things like += arrays instead of creating a non-fixed array and adding to it. OH the hands down biggest thing it has helped with is working with pester testing. Still learning about it but copilot has certainly helped a bunch.,u/Eimee_Inkari,https://www.reddit.com/r/PowerShell/comments/14jy6n1/comment/jpq3yg9/?utm_source=share&utm_medium=web2x&context=3
PHP,✅,✅,✅,N/A,18.58%,12,"1,462,608","2,550,461","9,196,172","2,286,391","4,036,079",183.19,61.41,64,0,13,162k,https://www.reddit.com/r/PHP/,"I've tried Chat GPT as I've seen some Youtube videos where people act in amazement while saying, ""wow! This s amazing, I just give ChatGPT a class and it gives me all the unit tests for it within seconds! Total game changer!"". Yeah, doesn't work worth a shit, at least not for me. It'd easier to just write the unit tests than refactor what Chat GPT gave me.",u/mdizak,https://www.reddit.com/r/PHP/comments/13l0hgf/comment/jknw4z9/?utm_source=share&utm_medium=web2x&context=3,"I am under the impression that the update frequency of PHP libraries has gone down since ChatGPT was released. My interpretation is, that many companies and developers are looking deeply into the AI stuff. And that is not in favor of PHP so that the attention is moving away from PHP solutions (at least temporarily). Once the AI dust has settled we will see the real impact AI has on the PHP market. Anything else what might be relevant was already posted by other members here, so I won't go there.",u/mission_2525,https://www.reddit.com/r/PHP/comments/16yb0d9/comment/k3t6vwq/,Might not be the answer you're looking for but it probably wouldn't be hard to write your own PHPCS sniff for it. Edit: Here is ChatGPT going over how you'd write a sniff for it. I haven't tested it so you might need to modify it a little bit to get it working.,u/soowhatchathink,https://www.reddit.com/r/PHP/comments/14k6z6i/does_codesnifferecs_have_the_possibility_to/jq43c4z/?context=8&depth=9
C,N/A,N/A,N/A,N/A,19.34%,11,"400,941","1,300,955","5,240,188","1,285,709","3,741,913",222.88,183.83,48.9,0,55,147k,https://www.reddit.com/r/C_Programming/,"Hard agree with the last part. ChatGPT & other AI tools can be pretty awful for non-trivial C code. It often spits out things that might work in other syntactically similar C-style, such as using string literals as switch cases, or concatenating string literals with the + operator. It's the worst nightmare for someone who's actively learning to code; it will confidently answer your question incorrectly, while sounding completely reasonable.",u/MyuuDio,https://www.reddit.com/r/C_Programming/comments/17rzzy9/comment/k8mqxv5/,"ChatGPT is failing you twice. First, because it's telling you about a bogus problem. Second, because it is not telling you about a real problem. The bogus problem is the redeclaration issue. It's technically correct that you will get a diagnostic if you try to define the same local variable twice in the same scope. But the solution there is trivial: don't define it, just re-use it. The more pernicious problem is handling or not handling the failure of realloc. When you overwrite the list variable with the result of realloc there is the possibility that the result is NULL. In that case, you have ""lost"" your original pointer.",u/aghast_nj,https://www.reddit.com/r/C_Programming/comments/178cc4l/comment/k4z9cby/?utm_source=share&utm_medium=web2x&context=3,"I've been using copilot for nearly two years now. For me it's just a nice auto complete. I don't think it ever solves anything for me. It just makes me faster, especially with repetitive shit.",u/Meatball_Subzero,https://www.reddit.com/r/C_Programming/comments/16geaal/comment/k078frr/?utm_source=share&utm_medium=web2x&context=3
C++,✅,✅,✅,✅,22.42%,10,"801,823","2,767,540","9,245,881","2,255,179","5,192,579",192.84,87.73,290.5,69.9,52,260k,https://www.reddit.com/r/cpp/,I use ChatGPT for tools and libs where the documentation is horrendous and its a coin toss as to whether it confidently talks truth or nonsense. I dont think its a good idea for beginners to be leaning on it as a teaching aid.,u/RainbowWarfare,https://www.reddit.com/r/cpp/comments/172vc4q/comment/k3z07sj/?utm_source=share&utm_medium=web2x&context=3,"My experience with ChatGPT is that it sucks ass with C++. Anything beyond basic syntax and programming it just gets wrong. My typical interaction is to ask it something specific, then spend the next 3 queries clarifying and then the next few pointing out issues in the code or methodology. I cannot recommend.",u/TheBrainStone,https://www.reddit.com/r/cpp/comments/172vc4q/comment/k3z96kd/?utm_source=share&utm_medium=web2x&context=3,"I have github copilot enabled in my ide, so whatever it suggests I can either use it or ignore. I find it helpful in writing docstrings and filling out somewhat repetitive rows (e.g. pattern matching cases). But otherwise it is not that clever. I also use chatgpt in some rare cases when I am curious how would chatgpt solve this or that problem. It is good to write some simple, short functions; but it is not reliable enough to write medium to very complex algorithms.",u/Asleep-Dress-3578,https://www.reddit.com/r/cpp/comments/172vc4q/comment/k3zprne/?utm_source=share&utm_medium=web2x&context=3
C#,✅,✅,✅,N/A,27.62%,9,"1,606,619","1,191,927","4,581,919","1,489,756","2,521,561",128.37,36.83,38.4,0,21,233k,https://www.reddit.com/r/csharp/,AI tools give me the code I need maybe 20% to 40% of the time. Another 30% or so I have to tweak it to make it work. For the remaining percentages what it spits out needs so many changes it's easier to write it myself than expect that I tweaked it without mistakes. Sometimes it feels like CoPilot might slow me down since now I tend to hit a new line and wait 2-3 seconds to see what it suggests.,u/Slypenslyde,https://www.reddit.com/r/csharp/comments/1768d7o/comment/k4kguvf/?utm_source=share&utm_medium=web2x&context=3,"I haven't found any in IDE plug-in that's been all that great. I've used copilot in conjunction with chatGPT and find myself using chatGPT way more than copilot. Keep in mind I use LLMs more as an enhanced search engine than a code writer. For code, I find it helpful to get a second opinion on a refactor, handing over error messages, writing one liners for some logic, and handing over a file to act as a second pair of eyes for what I can't see. Outside of code, I use it as a rubber ducky that can talk back when trying to think through some problems. Though tbh, the act of thinking about my problem and structuring it out to a prompt often solves my problem before I even hit send. Actually, now that I think about it. The damn thing has been a God send for writing and debugging terraform.",u/telewebb,https://www.reddit.com/r/csharp/comments/1768d7o/comment/k4kod5z/?utm_source=share&utm_medium=web2x&context=3,"Call me old, but I prefer to code things myself. AI is good to give you hints and steer you in the right direction. It can also write a lot of bullshit that looks like legit code. Then, debugging code that you didn't write gets very difficult. Remember that you write code once, but will read it many, many times. 
Have your boss pay for training.",u/quebecbassman,https://www.reddit.com/r/csharp/comments/1768d7o/comment/k4kgylh/?utm_source=share&utm_medium=web2x&context=3
Java,✅,✅,✅,✅,30.55%,8,"1,911,018","3,939,936","14,008,719","3,752,951","9,232,281",271.43,107.7,113.8,120.3,41,307k,https://www.reddit.com/r/java/,"Anyone who tried to use ChatGPT to solve some real-world programming issues knows, that even if you are able to replace 1-2 juniors with it, you will lose 1 senior to filter out the nonsense it can produce with full confidence. Not worth it. What's worse - I've seen many beginners treating AI as some form of oracle and believing everything it spits out even if it's all false. But AI is a powerful tool and it's worth checking it out and tracking its progress. Who knows what it will look like in a few years?",u/ByerN,https://www.reddit.com/r/java/comments/163eltc/comment/jy2asuq/?utm_source=share&utm_medium=web2x&context=3,"I have to wonder if AI translation is determinate. I use Github Copilot fairly often, and it returns schizophrenic suggestions apparently at random. It also seems stuck in pre Java-8 for syntax (I've never seen it use switch expressions, and it rarely uses streams).",u/benjtay,https://www.reddit.com/r/java/comments/16lu4wb/comment/k14rnx3/?utm_source=share&utm_medium=web2x&context=3,"I've been using GitHub Copilot with Android Studio for a couple of months. It's actually amazing. It doesn't produce a ton of suggestions, but the ones it does produce are right a lot of the time. Even the wrong ones are often pretty close and only need minor editing. It won't write full classes but it can write short methods or blocks of code. Highly recommend.",u/BarryFruitman,https://www.reddit.com/r/java/comments/176t5vb/comment/k4rwd2t/?utm_source=share&utm_medium=web2x&context=3
Bash,✅,N/A,N/A,N/A,32.37%,7,"154,693","866,313","3,605,350","574,292","2,121,149",8.69,3.01,0,0,0,61.7k,https://www.reddit.com/r/bash/,"chatgpt is very bad at bash. Every script that someone has posted here has had some really glaring errors, often data-destructive ones. In general for every single use-case of chatgpt (or any other generative model) unless you understand the correct output you should not trust it. You can use it to produce documents and reports or even scripts, but you should always read the output carefully and validate that what it says is correct.",u/[deleted],https://www.reddit.com/r/bash/comments/124h7gj/comment/jdzbtvp/?utm_source=share&utm_medium=web2x&context=3,"I've tried getting it to write some code. Very little is useful. It still very much requires education and experience with the tools you use in order to get effective, clean, and efficient code. I had tried some python scripts, but you need to specify libraries and tools to be used, and it doesn't do that well. As it learns more, it may become better at this, but for now it's a neat toy without real world benefits",u/RandomXUsr,https://www.reddit.com/r/bash/comments/zix2am/comment/iztmsp3/?utm_source=share&utm_medium=web2x&context=3,"This is more general advice for using chatGPT for generating bash scripts. chatGPT is a powerful tool, but it has both general and bash/linux related weaknesses. Never run script you dont understand. That is a hard pill to shallow when learning bash, but thankfully you can ask chatGPT to explain its reasoning. To be sure, open a new conversation and ask for explanation of part of the code there. You can also ask another instance for a general explanation of a new syntax or command, and then cross-check the original code. After seeing what chatGPT knows about an individual command, it doesnt hurt to quicklycheck the man-page anyway. ChatGPT is prone for using “general” syntax and flags even when some command doesnt exist. 
Lastly, commands can change through years and environments. Your man-pages tell you what version you have. Its a good strategy to ask if any tools already exist for the task or are build in, before asking for a bash script. For example you could script dropping your ssh-key in a remote machines .ssh-dir and then appending it to the trusted-keys file (or in folder) - or you can just use the ssh commands build in add-key option. There are a lot of tools build in to your average linux installation, and your distros repos are full of even more lightweight, trustworthy tools (as long as you stick to the official repos). If you arent exactly sure how a script behaves or if the syntax is robust, create your own test environments. You can create virtual (or real) directory structures, quickly fill them with very small files and run the script without touching your actual data. Ask chatGPT for more information (and use above steps to understand what it says). Related to the last point, pay attention to especially these aspects of any script chatGPT spews back: hardcoded paths (or less strictly, any path that isnt declared as a variable on the start of the script). If instead of a robust test environment, you just use a directory with subdirectories, hardcoded paths can escape that environment, connections outside your machine/local network: While I feel it is unlikely that chatGPT will compromise your system by opening an unsafe connection to unsafe address, the risk is worth mitigating. What if the first guy who got that address noticed its not used, and bought it to distribute malware, hoping chatGPT offers it again? But more likely problem is that you can rapidly pull a lot of data from the internet. It just opens up more doors to make a mess, modifying files in /etc, or your bootloader. 
You can cause all kinds of damage, including permanently disabling rights to modify the files to fix it (misconfigured privileges), making your system unbootable (fstab, grub), and just generally messing up your system. Back it up before any changes, read the man-pages twice, make small tests (and remember you usually need to reload systemd or reboot before changes take effect)",u/stepbroImstuck_in_SU,https://www.reddit.com/r/bash/comments/123buum/comment/jduund7/?utm_source=share&utm_medium=web2x&context=3
TypeScript,✅,✅,✅,N/A,38.87%,6,"224,865","2,043,216","4,224,408","1,455,167","2,941,085",131.46,24.59,24.9,0,9.2,115k,https://www.reddit.com/r/typescript/,"ChatGPT is great for common knowledge, but it just bullshits for more esoteric stuff. Case in point: & {}: This basically ""seals"" the type, making it impossible to add new properties to it. This is just pure nonsense as near as I can tell. A big red flag is how vague it is. What does ""seals the type"" mean? For that matter, what does it mean to ""add new properties"" to a type? I messed around with it a bit in a TypeScript Playground and I can find no behavior that remotely corresponds to this explanation from ChatGPT.",u/delventhalz,https://www.reddit.com/r/typescript/comments/17i01kj/comment/k6tvg8v/?utm_source=share&utm_medium=web2x&context=3,As someone also somewhat new to typescript but very comfortable with javascript I know what you're going through. Something I've found to be super useful is asking chatGPT questions when something doesn't make sense to me. It usually provides a correct type and allows me to move on with what I'm trying to do instead of banging my head against the wall for 20 minutes.,u/k3l2m1t,https://www.reddit.com/r/typescript/comments/13h0n0h/comment/jk2yehs/?utm_source=share&utm_medium=web2x&context=3,I don't think copilot supports typescript more than any other language. It often gives me incorrect suggestions when it comes to typescript. Probably the only reason I might end up dropping it actually. .,u/thinkmatt,https://www.reddit.com/r/typescript/comments/pzmlvt/comment/hf4khk4/?utm_source=share&utm_medium=web2x&context=3
SQL,N/A,N/A,N/A,N/A,48.66%,5,"667,216",123,1170,0,0,18.15,5.67,0,0,0,162k,https://www.reddit.com/r/SQL,"I've used ChatGPT Plus, basically the paid version using GPT-4, and while it has helped suggest some new ways of querying stuff that I hadn't considered, it also just completely made things up. Even when I asked to clarify, like ""are you sure that function actually exists?"" it would apologize and then say the exact same wrong thing lol. There's no real bullshit filter for these LLMs.",u/paymesucka,https://www.reddit.com/r/SQL/comments/14e04k3/comment/josxeg3/?utm_source=share&utm_medium=web2x&context=3,"I'm a DBA, 15 years. Chatgpt and other AIs are great up to about the skill level of a intern you'd hire as a jr. After that level of task... it takes more time and effort to vet it's output than it saves. I don't think it's a good tool for those learning, as they won't ever develop the skill to spot when and where the AI is wrong. I think there will be a wall of skill that will be impossible to climb for those who use it rather than working through problems on their own first. If you have the discipline to work the problem yourself and only use it if really stuck or to try an alternative, then it can be a nice assistant, like a personal intern that occasionally lies and tries to set you up for failure.",u/Festernd,https://www.reddit.com/r/SQL/comments/127zawr/comment/jeia6hv/?utm_source=share&utm_medium=web2x&context=3,"Mostly to debug, but I change the table names for privacy reasons. Once in a while I'll ask it to write code from my plain English when I'm trying to solve a problem. I'll give it my broken code or some context first.",u/feigndeaf,https://www.reddit.com/r/SQL/comments/12oo0lm/comment/jgj204k/?utm_source=share&utm_medium=web2x&context=3
Python,✅,✅,✅,✅,49.28%,4,"2,174,258","6,058,516","17,546,799","4,367,863","11,547,682",190.73,52.03,11.6,55.9,16,1.2m,https://www.reddit.com/r/Python/,"ChatGPT will make some programmers obsolete. Not because it can program better than them, but because one competent programmer that masters ChatGPT will be able to do the job of 2-3 of his colleagues in the same amount of time.",u/Feb2020Acc,https://www.reddit.com/r/Python/comments/10ytgkk/comment/j806l89/?utm_source=share&utm_medium=web2x&context=3,"I've used it for ideas. Ask it for code for something I'm writing just to see what it suggests. But I don't just copy/paste the code into my project. The first rule of using ChatGPT for coding is, you should only be using ChatGPT for coding if you don't actually need to use ChatGPT for coding. Like, it's good for ideas because it's basically trained on Stackoverflow and the docs, and it's impossible to have heard of or remember every package, module, and function. But if you don't understand what it gives you and you just paste it in, you're not learning anything and are leaving yourself open to big problems.",u/bamacgabhann,https://www.reddit.com/r/Python/comments/12wsx2g/comment/jhgagc7/?utm_source=share&utm_medium=web2x&context=3,"I had this exact same crisis of faith in Python about a year ago. The thing that really annoyed me was how much more effortful it was to create some of the features typed languages (especially C# with great interface support) had with a weaker guarantee. AI has fundamentally changed that for me. The SOTA LLMs can code best in python, can create type hints, documentation, and basic assertions/tests nearly free, and the localized hints about type give the AI great hints on how to code. If you accept AI as ""augmented intelligence"" then coding with python can be a very productive experience in 2023.",u/marr75,https://www.reddit.com/r/Python/comments/15r05mq/comment/jw5yilu/?utm_source=share&utm_medium=web2x&context=3
CSS,N/A,N/A,N/A,N/A,52.97%,2,"800,588","443,082","4,314,244","436,767","1,673,966",145.33,22.67,0,0,0,115k,https://www.reddit.com/r/css,"I'm not sure how it could help learn. I spent a little while messing with it and trying to generate some html/css/js for a simple responsive hamburger menu. Results were mixed. It got me most of the way there, but had trouble really putting it all together into one menu that worked as intended. I could have spent more time trying to manipulate it, but that would've taken more time that it would have to make the thing by hand. On some level it's just google with extra steps since you need to check and verify everything it outputs. I found that Lucas from LTT had a good assessment of it: it's usually pretty good, but when it's wrong, it's confidently wrong. I think it would be a crappy teaching aid since the student doesn't immediately recognize when the bot is wrong or why the code it produced doesn't work.",u/Kthulu666,https://www.reddit.com/r/css/comments/zudl9x/comment/j1ikchb/?utm_source=share&utm_medium=web2x&context=3,"I use chatgpt daily and it works wonders, if you know what you're reading. Otherwise, if you don't know something as a complete beginner and take chatgpt response as gospel, you're gonna be in a world of hurt when it starts lying to you giving 3 year old outdated information..",u/ipromiseimnotakiller,https://www.reddit.com/r/css/comments/17gcln8/comment/k6g1esr/?utm_source=share&utm_medium=web2x&context=3,"In that case it's great. And I like ChatGPT too. But a complete beginner doesn't see possible flaws in the solution. So there is the possibility they learn a bad practice. I use ChatGPT too sometimes, but you will need to look at the code. Don't just copy and paste.",u/cryothic,https://www.reddit.com/r/css/comments/16owij3/comment/k1tjfqg/?utm_source=share&utm_medium=web2x&context=3
HTML,N/A,N/A,N/A,N/A,52.97%,2,"1,183,299","1,140,227","7,284,841","786,699","2,055,453",746.33,118.12,0,0,0,46.5k,https://www.reddit.com/r/HTML/,i actually used chatgpt to some extent but it doesn't help more than giving directions. i could come up with a fairly okay layout with objects and movement with chatgpt but it doesn't do much more than that,u/cryothic,https://www.reddit.com/r/css/comments/16owij3/comment/k1tjfqg/,"ChatGPT is up to date as of 2021. That means that any information you get from it is already 2 years out of date. For fast moving languages like Golang, JavaScript, TypeScript, Rust, etc., that's too old. I've been able to make use of it because I have questions about setting up servers and how to refactor old Perl code, but other than that it's just not ready for primetime, yet, IMHO.",u/russlo,https://www.reddit.com/r/HTML/comments/11rb46v/comment/jc7yd4f/?utm_source=share&utm_medium=web2x&context=3,"One thing about ChatGPT is that it names its IDs and classes very specifically and rarely uses element-level styles. In my experience, it will give an element a class even if it's the only one on the page. I'm not sure if this practice differs based on the version.",u/steelfrog,https://www.reddit.com/r/HTML/comments/17knwvb/comment/k7943pw/?utm_source=share&utm_medium=web2x&context=3
JavaScript,✅,✅,✅,✅,63.61%,1,"2,518,260","6,390,411","22,397,798","6,753,636","23,751,668",486.2,87.82,88,24.7,22,2.4m,https://www.reddit.com/r/javascript/,ChatGPT for faster and consise search results and thats all .Co Pilot isn't my cup of tea.,u/Ok-Hospital-5076,https://www.reddit.com/r/javascript/comments/17o0p9o/comment/k7xhnws/?utm_source=share&utm_medium=web2x&context=3,"i use chat gpt occasionally instead of google, its ok for some small specific functions but it just saves me 10 minutes here and there. i can definitely imagine my life without it since i often forget it exists",u/andeee23,https://www.reddit.com/r/javascript/comments/17o0p9o/comment/k7ww57w/?utm_source=share&utm_medium=web2x&context=3,"I use copilot for autocompletion and chatgpt as sort of a ""documentation oracle"". gpt4 gives ""ok"" code, but it where it really shines is asking it to explain something or write a simple implementation.",u/alphabet_american,https://www.reddit.com/r/javascript/comments/17o0p9o/comment/k7wvdjl/?utm_source=share&utm_medium=web2x&context=3
language,multiple,babel,mbxp,humaneval_x,so_2023_language_percent,so_2023_language_rank,so_tags,github_prs,github_pushes,github_issues,github_stars,stack_gb,codeparrot_gb,alphacode_gb,codegen_gb,polycoder_gb,subreddit_members,subreddit_url,anecdote_1_content,anecdote_1_author,anecdote_1_url,anecdote_2_content,anecdote_2_author,anecdote_2_url,anecdote_3_content,anecdote_3_author,anecdote_3_url
Erlang,N/A,N/A,N/A,N/A,0.99%,38,"9,621","70,890","249,209","49,786","127,120",Unspecified,0,0,0,0,9.5k,https://www.reddit.com/r/erlang,It seems like ChatGPT doesn't know that much Erlang.,u/Ranugad,https://www.reddit.com/r/erlang/comments/11kl57z/comment/jbbw94t,"I recently asked ChatGPT to translate some Erlang code into Elixir. Here’s an edited transcript, for your amusement and edification…",Rich_Morin,https://elixirforum.com/t/asking-chatgpt-to-translate-erlang-to-elixir/53548,"I don’t think anything automated is going to work well. ChatGPT might be interesting but you’ll almost certainly have to fix it up quite a bit. https://learnxinyminutes.com/docs/erlang/ gives a quick rundown on erlang syntax/semantics and https://learnyousomeerlang.com/ is a good book on it",u/boy-griv,https://www.reddit.com/r/AskProgramming/comments/10tave8/comment/j78bvj5
Julia,N/A,N/A,1.15%,37,"12,402","39,305","166,898","51,276","52,326",3.09,0.29,0,0,0,23.9k,https://reddit.com/r/julia,"I usually start my own articles with ChatGPT but the truth is that right now, if you want to say something interesting in the Julia space, you mostly need to write it yourself since the volume of content about Julia out there isn’t enough for the outputs of ChatGPT to be very useful since our ecosystem is so small.",u/LoganKilpatrick1,https://www.reddit.com/r/Julia/comments/zzvkso/comment/j2i6knx/,"It wasn't trained on sufficient Julia code. As with any machine learning model, ChatGPT is only able to regurgitate what's been fed into it. Also, this behaviour happens with basically every other topic, too. LLMs work by trying to predict what the next word in a sentence would be based on the previous string of words. If a sentence is incomplete, it's going to add a next word. That word is going to be whichever has the highest confidence score, regardless of low that score may actually be. This results in it just making shit up, but often shit that sounds plausible. We've seen CGPT invent academic articles, books, and even entire people because it makes sense to in the sentence it's generating.",u/Kichae,https://www.reddit.com/r/Julia/comments/112wlle/comment/j8mpgx5/,"I suspect the current language model behind ChatGPT was fed with a lot of code examples from Stack Exchange, but the Julia community mainly uses Discourse instead, which probably wasn't in the training set: https://discourse.julialang.org/",u/Paravalis,https://www.reddit.com/r/Julia/comments/112wlle/comment/j8qzc0j/
4 Clojure N/A N/A N/A N/A 1.26% 36 17,630 112,757 518,359 84,128 272,970 Unspecified 0 0 0 0 31.5k https://www.reddit.com/r/Clojure I've been using Copilot since December 2022. It sucks for Clojure but can be great for other languages like Python, JavaScript, SQL, etc. if you know how to prompt it. As others have mentioned, Copilot excels at reducing boilerplate and picking up on patterns. For example, let's say there is a table of data in a markdown document and you want to convert it to a vector of maps. You can copy/paste the markdown table into your buffer as a comment and just start writing the data structure you want it to be, Copilot will figure it out and complete it. It's also useful for generating random utility functions. Recently in JavaScript, I typed `function lerp` (linear interpolation) and it pretty quickly filled it in. I had an array of hex color values that I wanted to be RGB and I wanted to double the number of values by interpolating between them. All I had to do was type that in a comment and wait a second before it gave me a working rough draft of the function. Copilot can actually do a lot of these things for Clojure but when I was trying to use it I found myself consistently having to fix issues with delimiters, typically round braces. Eventually, I just gave up on it. Maybe I'll give it another shot when Copilot-X releases. ChatGPT is much more useful for Clojure than Copilot. It does hallucinate and get some things wrong but overall it's awesome for generating documentation, explaining code, translating diffs into PR notes, and exploring ideas. I've found it very useful for random Java questions and then translating the answers into mostly working Clojure code. These things are handy tools and have quirks but they're going to get better. It's a great time to be a cosmopolitan (polyglot) programmer. u/noprompt https://www.reddit.com/r/Clojure/comments/148nhuj/comment/jo2z2n8 No Clojure. No Julia. No Haskell. No Racket. No Scheme.
No Common Lisp. No OCaml. And, as much as I despise Microsoft, No C#. No F#. No Swift. No Objective-C. No Perl. No Datalog. A glaringly lacking choice of languages. waffletower https://news.ycombinator.com/item?id=35803856 FizzBuzz was once a common programming exercise used for screening software developers (maybe it still is?). I told ChatGPT to "Write an efficient fizz buzz function in Clojure". @EricTheTurner https://x.com/EricTheTurner/status/1600344406166380544?s=20
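The lerp anecdote in the Clojure row above describes a concrete task Copilot handled well: interpolating between hex colors to double a palette. As a hedged illustration of what that function does (in Python rather than the quoted JavaScript; `hex_to_rgb`, `lerp`, and `midpoints` are hypothetical names, not the poster's code):

```python
def hex_to_rgb(h):
    """'#ff8800' -> (255, 136, 0)."""
    h = h.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def lerp(a, b, t):
    """Linear interpolation between a and b at t in [0, 1]."""
    return a + (b - a) * t

def midpoints(colors):
    """Insert the RGB midpoint between each adjacent pair of hex colors,
    roughly doubling the list as described in the quote."""
    rgb = [hex_to_rgb(c) for c in colors]
    out = []
    for a, b in zip(rgb, rgb[1:]):
        out.append(a)
        out.append(tuple(int(lerp(x, y, 0.5)) for x, y in zip(a, b)))
    out.append(rgb[-1])
    return out
```

For example, `midpoints(['#000000', '#ffffff'])` yields black, mid-gray, and white as RGB triples.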
5 Solidity N/A N/A N/A N/A 1.33% 35 6,669 0 0 0 350 0 0 0 0 0 17.0k https://reddit.com/r/solidity ChatGPT is awful at smart contracts, the data is years out of date, and it tends to override and make functions that are unnecessary. Even worse, it overrides safe good functions for unsafe inefficient functions. Speaking of inefficiency, it will seriously de-optimize optimized code, even when asked to gas optimize it. u/Adrewmc https://www.reddit.com/r/solidity/comments/142amjb/comment/jn48x8v/ Despite the mixed results, ChatGPT, aka GPT-3.5, is a step forward in the direction of writing code with an AI assistant. I actually enjoyed doing these little experiments. However, compared to other experiments I did with JavaScript and other languages, a clear takeaway from my efforts is that when it comes to the Web3 space, GPT doesn’t yet have enough accuracy. In fairness, there is far less available Solidity and Web3-related JavaScript code in the wild than there is general-purpose JavaScript code. Plus, the web3 industry is constantly changing, which makes the problems of ChatGPT relying on an old dataset much worse. On the positive side, generating an ABI from Solidity is something it did well, which shows it can learn from the available snippets the general rules to create something new. Lorenzo Sicilia https://outlierventures.io/article/can-chatgpt-really-be-trusted-to-write-a-smart-contract-or-to-refactor-your-existing-solidity-code/ Can someone please make an open coder model trained on Solidity u/thatdudeiknew https://www.reddit.com/r/LocalLLaMA/comments/14qednx/comment/jqmq2t5/?utm_source=share&utm_medium=web2x&context=3
6 Lisp N/A N/A N/A N/A 1.53% 34 6,945 8,431 73,903 12,870 47,157 Unspecified 0 0 0 0 37.7k https://www.reddit.com/r/lisp ChatGPT is known to lie and be confident in its incorrectness. Also, try telling it to convert a program from lisp to python that uses advanced features like the condition system. u/KaranasToll https://www.reddit.com/r/lisp/comments/138aovs/comment/jixfrkr/ How do you think the advent of ChatGPT and Copilot would affect the adoption and popularity of Common Lisp, Clojure and Schemes? On one hand, Large Language Models did not have access to these "niche" languages for training as much as the more popular alternatives like Python and Typescript so the quality of their output would be worse in comparison. On the other hand, the "interactive" aspect of LISP in that you code stuff, test in REPL and code again would not be so unique since the developer can just use the chat system to refine his solution. The other upside that LISPs had over the likes of Rust and C++ is the lack of syntax clutter and cleanness of s-expressions. On this front too, they would hurt from the likes of ChatGPT since the syntactic complexity is handled by the LLM not the developer. u/friedrichRiemann https://www.reddit.com/r/lisp/comments/11lwwv1/possible_effects_of_aiassisted_tools_on_lisps/?utm_source=share&utm_medium=web2x&context=3 I'm an engineer working in the construction field, and I'm currently trying to create a Lisp routine for a project I'm working on. I've been trying to use GPT to generate the code, but I'm having some trouble getting it to work properly. I was wondering if anyone knows of a pre-trained GPT that has been specifically trained on Lisp code. I've been searching online, but I haven't had any luck so far. If anyone knows of a pre-trained GPT with Lisp, or has any tips for training my own GPT on Lisp code, I would really appreciate the help.
/u/Fine_Impression_3171 https://www.reddit.com/r/ChatGPT/comments/12o4k1n/looking_for_pretrained_gpt_with_lisp_autocad/
7 GDScript N/A N/A N/A N/A 1.71% 33 906 561 3,692 1,615 9,953 Unspecified 0 0 0 0 147k https://www.reddit.com/r/godot/ Irrational AI hatred aside, none afaik, godot 4 is too new. When trying to figure out some kinks in my code it kept giving me garbage mixed with outdated godot 3 code. Don't bother, it's faster to just do it yourself for now. It's kind of annoying because in my experience for beginner devs, AI can be a huge help in explaining why your code no worky and how to improve it. It allowed me to go much further in my C++ projects than I thought and saved a ton of time spent on research or debugging. u/Merosian https://www.reddit.com/r/godot/comments/17nv29g/comment/k7w2nrx/?utm_source=share&utm_medium=web2x&context=3 I was playing with this yesterday and had some difficulty getting it to produce GDScript instead of Python. It insisted the Python code it generated was GDScript haha. Otherwise it made exactly what I wanted, just in the wrong language. u/[deleted] https://www.reddit.com/r/godot/comments/zf6tve/comment/izaiw13/?utm_source=share&utm_medium=web2x&context=3 You can: Fine-tune ChatGPT. If you're willing to pay, I might help setting it up at some point in the near future. Use a different model like Llama-2 (open-source) which has more recent data (which you can also fine-tune), or from companies like Anthropic/Claude etc. Look (and contribute?) to godot-dodo and Godot Copilot. Export the Godot docs to a PDF and use some plugin, I guess? Never tried it. Copy-paste the GDScript reference page, which will likely improve its zero-shot predictions. u/kmouratidis https://www.reddit.com/r/godot/comments/16j7u9k/comment/k0odex1/?utm_source=share&utm_medium=web2x&context=3
8 Haskell N/A N/A N/A 2.09% 32 50,979 106,539 646,012 146,857 306,235 6.95 1.85 0 0 0 76.1k https://www.reddit.com/r/haskell/ I've been generating a ton of Haskell code with it and it's been fantastic. I have a driver for content addressable storage in my side project, it's pretty simple, but it still took me a few hours each to implement local filesystem and MinIO drivers with tests and ChatGPT did the bulk of the work for Redis and LevelDB implementations in minutes. I've also found it much easier to work with on Haskell code than on python or JS. Obviously some of this is the usual reasons why I would find Haskell code easier to deal with than dynamic languages but I think that the effect is amplified with ChatGPT because the "if it compiles it works" effect gives me much more confidence that what it generated isn't missing anything important than with the other languages, so I can move much faster. u/lgastako https://www.reddit.com/r/haskell/comments/zede58/comment/iz68s9c/?utm_source=share&utm_medium=web2x&context=3 Personally, I've been using Copilot mostly in Ruby (work...) and Haskell, and it is much better at predicting Haskell code. I think it's because Haskell has so much context (type signatures, purity, only imported modules are in scope) which greatly restrict what you can do in a particular function and thus Copilot's suggestions seem to be much more often in line with what I wanted to write. u/qqwy https://www.reddit.com/r/haskell/comments/16o5u8e/comment/k1jc68v/?utm_source=share&utm_medium=web2x&context=3 Here, I present the (lightly edited) story of using ChatGPT conversationally to solve a non-trivial problem in Haskell. It definitely gets some things wrong, and it’s still unclear whether co-developing this with ChatGPT made anything easier than it would have been otherwise. But in any case, it was definitely a different and less lonely experience than just programming on my own.
Chris Smith https://cdsmithus.medium.com/pair-programming-with-chatgpt-haskell-1c4490b71da6
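The content-addressable storage driver mentioned in the Haskell quote above is simple to sketch: blobs are keyed by the hash of their own contents, so identical data always maps to the same key. A minimal in-memory version in Python (hypothetical names; the quoted project's actual drivers targeted the local filesystem, MinIO, Redis, and LevelDB):

```python
import hashlib

class MemoryCAS:
    """Toy content-addressable store: each blob is keyed by its SHA-256 hex digest."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        # The key is derived from the content, so storing the same
        # bytes twice is idempotent and deduplicates automatically.
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]
```

A real driver would persist blobs to disk or an object store, but the put/get contract is the same.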
9 Objective-C N/A N/A N/A N/A 2.31% 31 292,409 263,146 1,172,307 397,275 3,003,177 Unspecified 0 0 0 0 7.0k https://www.reddit.com/r/ObjectiveC/ N/A N/A N/A N/A N/A N/A N/A N/A N/A
10 Elixir N/A N/A N/A N/A 2.32% 30 9,510 113,018 255,430 65,166 210,145 Unspecified 0 0 0 0 27.1k https://www.reddit.com/r/elixir/ One day, I needed to implement a priority queue with amortized O(log n) decrease-key operation in Elixir, but I didn't know how, so I consulted Monica (which interfaces GPT-3, I think), and it gave me the code of a whole Elixir module that is absolutely wrong. It was a binary heap implemented using a single list as if it's a mutable array. Furthermore, it won't even compile! I tried to correct the "mistake" GPT made, so I told it more about Elixir, about immutability, about lists in Elixir. I even tried to "inspire" GPT to write other kinds of heaps, like binomial heap and pairing heap, but GPT is so stubborn (though very polite) that it keeps giving me almost the same code over and over again. At last I gave up on GPT and turned to StackOverflow, and just a few words enlightened me (FYI, it's two heaps, one for insertion, one for deletion, and when the top nodes in both heaps have the same key, cancel them out). My conclusion is: AI is useless in some domains when it doesn't have enough learning material in those domains. u/a3th3rus https://www.reddit.com/r/elixir/comments/16vrhr6/comment/k2xel5z/?utm_source=share&utm_medium=web2x&context=3 Using ChatGPT when programming with Elixir can bring several advantages. One of the most significant advantages is that it can provide quick and accurate responses to various programming queries, including syntax and documentation. This can help programmers save time and improve their productivity. Additionally, ChatGPT can offer personalised and adaptive learning experiences based on individual programmers’ skill levels and preferences. This can help programmers learn Elixir more efficiently and effectively. u/erlangsolutions https://www.reddit.com/r/elixir/comments/13xeh8w/how_chatgpt_improved_my_elixir_code_some_hacks/ The question is: how much boilerplate code do you really write? 
Elixir compared to other languages has little to no boilerplate, and for moments such as phoenix things, there are configurable generators. I wouldn’t want an AI incapable of problem solving to generate complex code for me, because as tempting as it seems, the productivity decreases a lot if we talk about refactoring generated code compared to creating your own new code. D4no0 https://elixirforum.com/t/get-ai-code-generation-tools-to-create-correct-elixir-code-or-else/53931/2
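The StackOverflow hint quoted in the Elixir row above (two heaps, one for insertion and one for deletion, cancelling matching tops) is the classic lazy-deletion trick for decrease-key. A rough Python sketch of the idea, with hypothetical names, glossing over duplicate-key subtleties:

```python
import heapq

class LazyHeap:
    """Min-heap with amortized-cheap decrease-key via lazy deletion:
    one heap holds inserted keys, the other holds keys scheduled for
    deletion. When the two tops match, they cancel out."""

    def __init__(self):
        self._live = []  # keys inserted
        self._dead = []  # keys pending deletion

    def push(self, key):
        heapq.heappush(self._live, key)

    def decrease_key(self, old, new):
        heapq.heappush(self._dead, old)   # schedule the old entry for removal
        heapq.heappush(self._live, new)   # insert the decreased key

    def _settle(self):
        # Cancel matching tops so dead entries never surface.
        while self._dead and self._live and self._live[0] == self._dead[0]:
            heapq.heappop(self._live)
            heapq.heappop(self._dead)

    def pop(self):
        self._settle()
        return heapq.heappop(self._live)
```

Note this cancels by key equality, so duplicate live keys would need entry IDs in a production version; it is a sketch of the technique the quote describes, not the poster's code.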
11 Perl N/A N/A 2.46% 29 67,938 125,129 634,214 117,426 188,697 5.5 4.7 0 0 0 16.4k https://www.reddit.com/r/perl/ There are a few problems with this, and I noticed the exact same thing with the GitHub Copilot. It's barfing out examples it was trained on with no idea about what they do, whether they work, and if they are current. Transaction objects no longer have a success method. This was deprecated a long time ago and finally removed in version 9. The error method returns a single value. Minor problem, but still cruft that shouldn't be there. Call json on the response to get the data structure rather than doing this yourself. Even then, using JSON directly, while fine, skips over the Mojo::JSON::decode_json. It's a bit of a pain in the butt, but work hard to use the same parser everywhere in an application since they tend to have slight differences (say, like how they represent null, true, or false). Somewhere along the line, ChatGPT saw this code or something very similar. It then returns it to you with no intelligence about what it is doing or what it should do. It's very likely that the source ChatGPT saw is not only old, but also unsophisticated. You're likely just cargo-culting off StackOverflow with extra steps. But, this also isn't the way you probably want to write code. You don't want to return the token really, you want to add that to the user-agent so it provides it in every request without additional code from you. I have plenty of examples in Mojo Web Clients. That's another problem with the source material for these sorts of things: it's training itself off public data, but often our examples are mere demonstrations of ideas rather than advice on reliable software engineering (since we aren't going to write a book for every question someone asks). u/briandfoy https://www.reddit.com/r/perl/comments/10j0k00/comment/j5ki948 "Somewhere along the line, ChatGPT saw this code or something very similar.
It then returns it to you with no intelligence about what it is doing or what it should do." IMO, this is quite irrelevant, because you must understand that whatever output - be it code, poems or whatever - from an AI-assisted service is not perfect. The main point is: it helps. And that's its main selling point today, because that's how StackOverflow also works: sometimes it's perfect, but most of the times it just helps, maybe because you have addressed the wrong audience, didn't word your question/problem correctly or otherwise. With ChatGPT you get an instant reply, and you can ask it to refine its reply. Instantly. Rinse and repeat. So if it uses StackOverflow data (which I assume it does) it's already better in the sense that it's instant and filters out noise, especially personal attacks, or otherwise replies that intimidate the person asking the questions. "It then returns it to you with no intelligence about what it is doing or what it should do." Let's be honest, we have all been there and/or we have had colleagues who fit that description. :) u/nobono https://www.reddit.com/r/perl/comments/10j0k00/comment/j5l9s1c/ You mentioned being new to perl and programming. Personally, I think ChatGPT is a great resource for these types of questions. I asked it your question and copied the function from csv2fasta.pl u/its_a_gibibyte https://www.reddit.com/r/perl/comments/14capfv/comment/jol2a4b
12 Scala N/A N/A 2.77% 28 111,969 605,988 1,508,526 271,184 540,327 14.87 3.87 4.1 0 1.8 51.3k https://www.reddit.com/r/scala/ Today I decided to test it by asking how one would use Scala 3 macros to get the types of the inputs and outputs of a method. It had some decent suggestions to do that for someone that is new to macros, but a lot of its answer was false, suggesting people use something called QuotesContext, not recognizing properly what extension methods are available for the Symbol type, and worst of all, trying to splice Type values into an Expr. If they can manage to get chatgpt to actually tell the truth consistently (like saying "I don't know how to do that" rather than just lying) I think it will be a nice resource for discovering how to do stuff you don't currently know how to do. Sadly, it's still got a nasty habit of making stuff up. u/markehammons https://www.reddit.com/r/scala/comments/124ocqh/scala_and_chatgpt/ Well... this is a very old thread, but I'm using the latest Copilot for Scala available as of this post. I mostly use the ZIO framework. I was skeptical at first but I'm finding the suggestions get smart quickly and it is generating a lot of code fragments pretty well. I'm not claiming I can live without it, but as of today, I'm thinking it works pretty well for my scenarios. I could easily see not wanting to code without it in the near future. I think using a framework like ZIO makes it easier to generate code fragments because the ZIO framework has a fairly predictable surface area, but that's just a guess. u/agilesteel https://www.reddit.com/r/scala/comments/ovoc8n/github_copilot_for_scala_does_it_work/ I wanted to start a new Scala project based on Clean Architecture aka dependency inversion. So I asked for a basic example to demo the principles. There was a lot of pretty code but ultimately it had no idea what this was about. The code was bs.
u/k1v1uq https://www.reddit.com/r/ChatGPTCoding/comments/zpunkt/comment/j25ftsr/?utm_source=share&utm_medium=web2x&context=3
13 Delphi N/A N/A N/A N/A 3.23% 27 51,475 310 552 0 0 0 0 0 0 0 3.8k reddit.com/r/delphi PSA: GitHub Copilot works with Delphi u/EasywayScissors https://www.reddit.com/r/delphi/comments/wnhk9x/psa_github_copilot_works_with_delphi/?utm_source=share&utm_medium=web2x&context=3 As you can see, it is possible to use an AI for simple pieces of code to create basic Delphi code quickly. We can now go one step further and implement this in Delphi itself. Marco Geuze https://gdksoftware.com/knowledgebase/delphi-and-chatgpt I asked a series of Pascal programming questions to an AI chatbot system while testing its abilities, and the following page is a record of its responses. u/sysrpl https://www.reddit.com/r/delphi/comments/1006ybh/programming_pascal_using_an_ai_chatbot/?utm_source=share&utm_medium=web2x&context=3
14 Groovy N/A N/A N/A N/A 3.40% 26 30,014 132,381 431,291 108,265 140,122 Unspecified 0 0 0 0 3.0k https://www.reddit.com/r/groovy/ And that it was possible to use the code created by the tool to generate some code that could be used to start your programming. This could save quite a bit of time for developers to use this as a starting point, and you don’t need to have a lot of experience to start coding in UDFs in Groovy. It is also interesting that it has much knowledge about what is going on in an SAP universe, I would have thought it was more difficult to get data about it. Figaf https://figaf.com/chatgpt-groovy-code-help-for-sap-cloud-integration/ Groovy is a great language with a ton of utility, and can scale like crazy! Write code as dynamic as you want, and choose to refactor into a more type-safe manner later. It's totally worth learning and having it in your toolkit. I program in it every day for many projects. All Java (99.9%) is also valid Groovy, so it's almost impossible not to understand and work with any Java code base you may come across once you get familiar with Groovy. ChatGPT and Github Co-pilot also write excellent Groovy code, which can aid you in learning, and just programming with it in general. It's still actively maintained, too! It's not going away any time soon. u/West_Performance_129 https://www.reddit.com/r/groovy/comments/16kuh6s/comment/k1i0lqn/ When I was building react-native-colo-loco, I had to write a Gradle script, which is written in Groovy. I know a little Groovy, but not much. So I focused on writing precise, accurate comments, and let Copilot suggest lines of code. I could then lean on my development experience to pick up on patterns and syntax, and go from there. Jamon Holmgren https://shift.infinite.red/getting-the-most-from-github-copilot-8f7b32014748
15 VBA N/A N/A N/A N/A 3.55% 25 212,313 22,482 77,915 17,439 19,273 2.73 1.91 0 0 0 52.3k https://www.reddit.com/r/vba/ It depends on how you use ChatGPT though. I started a VBA project using methods I had used in the past. When that didn’t work, I tried the Google approach, and still couldn’t do what I wanted. Then, I remembered that ChatGPT does code, and decided to give it a shot. Honestly, what it gave me was riddled with errors, but I went through error by error and forced the AI to come up with corrections. I would copy-paste the code into the prompt and ask it to identify potential errors and explain how they could be fixed. I got a really intimate understanding of the code, the reasons for the errors, and the strategies for correcting them. Even then, the code was flawed and ultimately failed. But I was able to use some of what I picked up throughout the process to build my own foundation for the code that would eventually work and used the AI to help fill in the blanks. I got a lot out of the experience. It’s very important to ask very specific questions and to make sure that you understand the recommendations that it makes so you don’t get lost in later steps. I used Google to supplement some of the information the AI gave me to improve my understanding. I spent a lot of time with this thing, and I think we both came out of it just a little better at what we do. u/imartnm https://www.reddit.com/r/vba/comments/108zy8k/comment/j3zcukr/?utm_source=share&utm_medium=web2x&context=3 I've tried using it for VBA/Power Query code, but it's spotty at the best of times. It sometimes will reference functions that don't exist, or will ignore the carefully worded instructions you give it. At its current state it's most useful as a glorified google/stackoverflow search. It can also be helpful while debugging or just to throw some suggestions your way.
Writing out the basic structure of my module and asking for recommendations/alternatives to certain implementations is fun and has taught me some new tricks. So it's cool, but not really reliable. Don't let it write your code for you or you might risk spending more time fixing it than you would have just writing it. I'd say its VBA capabilities are better than its grasp on PowerQuery (M). u/Confuciusz https://www.reddit.com/r/vba/comments/108zy8k/comment/j3wn54u/?utm_source=share&utm_medium=web2x&context=3 Lol I just made a comment on another similar post where OP said GPT was incredible for Excel 😂 But yeah, GPT is still awful for VBA or long formulas. I tried giving clear instructions for simple tasks that it couldn’t get right. It’s cool, but long way to go u/E_Man91 https://www.reddit.com/r/vba/comments/123zuo6/comment/je3ixwy/?utm_source=share&utm_medium=web2x&context=3
16 MATLAB N/A N/A N/A N/A 3.81% 24 94,777 23,655 266,359 33,289 84,982 Unspecified 0 0 0 0 53.2k reddit.com/r/matlab Yep, pretty much all the MATLAB code ChatGPT wrote for me worked. There was one instance whereby there was a multiplication that went awry as it used * instead of .* to multiply two vectors. When I pointed that out, it corrected the code. In this case it was an order of operations issue and it correctly got it sorted by adjusting the parentheses. Pretty impressive so far. u/worblyhead https://www.reddit.com/r/matlab/comments/12fwjx5/comment/jficv03/?utm_source=share&utm_medium=web2x&context=3 Yes, you can use Co-Pilot with Matlab code. However, it won't work with the usual MATLAB IDE, so you have to use one of the supported IDEs (e.g. VS Code or JetBrains). u/Latter_Trouble_3227 https://www.reddit.com/r/matlab/comments/y07uop/comment/jbgoj6h/?utm_source=share&utm_medium=web2x&context=3 Why would you think such a simple plot with callback on click would not work? Now I wonder if it made the callback zoom-safe. I was using update callbacks after only 8 months of college experience with Matlab. And yet, I can’t make chatGPT give me the correct answer to a function inverse involving rational polynomials (at least the steps it got right allowed me to remember how to do function inverses) u/LevelHelicopter9420 https://www.reddit.com/r/matlab/comments/12fwjx5/comment/jfll3tu/?utm_source=share&utm_medium=web2x&context=3
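The `*` vs `.*` slip described in the MATLAB row above is a classic one: in MATLAB, `*` is the matrix/inner product while `.*` multiplies elementwise. A plain-Python sketch of the distinction (hypothetical function names, used here only to illustrate the two operations):

```python
def elementwise(a, b):
    # MATLAB's .* : multiply matching entries of two vectors
    return [x * y for x, y in zip(a, b)]

def inner(a, b):
    # MATLAB's * on conformable vectors: sum of pairwise products
    return sum(x * y for x, y in zip(a, b))
```

For `[1, 2, 3]` and `[4, 5, 6]`, `elementwise` gives `[4, 10, 18]` while `inner` gives `32`, which is why mixing the two operators silently changes a result's shape and meaning.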
17 VB.NET N/A N/A N/A N/A 4.07% 23 335,092 15,653 35,848 2,915 0 Unspecified 0 0 0 0 145k https://www.reddit.com/r/dotnet/ What I've seen from gpt and copilot is that it's a good junior and sparring partner, but it's no substitute for a senior. It lacks reasoning and analytical capabilities to be a true senior. For example, it can tell you the difference between mediator and nservicebus (dotnet environment), but it cannot explain which one you should use for the project you are working on. u/KenBonny https://www.reddit.com/r/dotnet/comments/16j8il5/comment/k0qjb6u/?utm_source=share&utm_medium=web2x&context=3 I've been using it for a LOT of utility classes, regex expressions, and things like that. It's nowhere near replacing my job yet but it's saved me countless hours on some rather trivial but tedious tasks. Most recent today was a function that converts a string to camel case, worked perfectly right out of the gate. Yea I probably could have found the same function on google in 10 min, but I would have had to comb through ads, and useless posts on stack overflow, before I found one I knew would be performant. It's not laziness, the rest of my job is busy enough, I could have spent an hour or two figuring out the logic from scratch but simply put, this is a far more efficient use of my time. u/Ch33kyMnk3y https://www.reddit.com/r/dotnet/comments/10s8eld/comment/j704bu4/?utm_source=share&utm_medium=web2x&context=3 Yeah, I just use the free version but I'll ask it to do something, it kinda does it, I ask, "Is this part necessary?" It then responds with oh you're right and redoes it but in a way that still has questions, like I wanted it to explain why it did something the way it did and it takes that as I'm saying it's not really needed. Then I ask it to explain the new changes and it reverts things to the way it did them before thinking I spotted an error in how it redid the code. 
🤦‍♂️ I still think it's a nice option to springboard learning or get quick explanations of things with examples, but the more I've used it the less I'm convinced it'll be stealing my job anytime soon. What I actually fear more are engineers and/or middle managers who don't know any better trusting everything it suggests who then think this makes engineers less needed or useful. u/ModernTenshi04 https://www.reddit.com/r/dotnet/comments/15od4zx/comment/jvr5vur/?utm_source=share&utm_medium=web2x&context=3
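The camel-case utility mentioned in the VB.NET row above is exactly the kind of small, tedious function these tools are reported to handle well. A hedged sketch of such a function in Python (the quoted one was in .NET; `to_camel_case` and its exact rules are assumptions, not the poster's code):

```python
import re

def to_camel_case(text: str) -> str:
    """Convert a spaced/underscored/hyphenated string to camelCase,
    e.g. 'hello world_example-text' -> 'helloWorldExampleText'."""
    words = [w for w in re.split(r"[\s_\-]+", text.strip()) if w]
    if not words:
        return ""
    return words[0].lower() + "".join(w.capitalize() for w in words[1:])
```

Edge cases (acronyms, existing camelCase input) would need extra rules, which is also where generated versions of such helpers tend to need review.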
18 R N/A N/A 4.23% 22 499,872 51,800 506,309 88,649 91,654 Unspecified 0 0 0 0 36.8k https://www.reddit.com/r/Rlanguage/ It's even helpful for example datasets. If you want to test or play around it will create a dataframe example. Also if you know one programming language it can help translate. It will even rewrite the code to look better. E.g. write this code in python pandas but make it more readable like r dplyr. Anything regex is nice as I don't have to hope a specific example is on stack overflow. ChatGPT, from my experience, will often favor doing things with for loops instead of taking advantage of dplyr or pandas functions. As with everything ChatGPT, though, check the code as it will confidently give you an answer and even print out a fake output. Often pointing out its error gets ChatGPT to fix the code. u/2truthsandalie https://www.reddit.com/r/Rlanguage/comments/17q56xq/comment/k8b2phr/?utm_source=share&utm_medium=web2x&context=3 I have found it hit and miss. I was able to knock up simple Shiny apps in a minute (https://youtu.be/8oJ1HtkpDt0) but have had it write nonsense code for some other things I was trying (especially webscraping). GPT Studio is pretty good (demo here https://youtu.be/QQfDTLExoNU) but as someone else mentioned, take a look at Github Copilot u/DrLyndonWalker https://www.reddit.com/r/Rlanguage/comments/17q56xq/comment/k8bi6nq/?utm_source=share&utm_medium=web2x&context=3 I do it constantly, not only for debugging which it is spectacular at, but for especially tedious things like using ggplot. If you can think it, GPT-4 and the other specialized models can code it. The real key is to put thought into the question you want to answer with the code and then to very deliberately tell the GPT what to do. For example, “I have a data frame with x, y, z variables. Please write R code to perform a, b, c statistical analysis. Place the results into a variable called results.” And so on.
u/jrdubbleu https://www.reddit.com/r/Rlanguage/comments/17q56xq/comment/k89wmhi/?utm_source=share&utm_medium=web2x&context=3
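The "pandas, but readable like dplyr" request quoted in the R row above usually maps to method chaining: filter, group, summarize as one pipeline instead of nested calls or for loops. A small sketch, assuming pandas is installed and using made-up column names:

```python
import pandas as pd

df = pd.DataFrame({
    "species": ["a", "a", "b"],
    "mass": [1.0, 3.0, 2.0],
})

# dplyr-style pipeline: filter %>% group_by %>% summarise
result = (
    df[df["mass"] > 0.5]
      .groupby("species", as_index=False)
      .agg(mean_mass=("mass", "mean"))
)
```

The parenthesized chain reads top-to-bottom like a dplyr pipe, which is the readability the quote is asking the model to imitate.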
19 Swift N/A N/A 4.65% 21 331,145 425,921 1,334,455 325,962 2,731,776 Unspecified 0 0 0 0 107k https://www.reddit.com/r/swift/ Just a general tip: even though it's a bit out of date, chatgpt will answer these questions much faster and sometimes more accurately than Reddit can. I've pretty much replaced Google with chatgpt and my productivity is up and stress is down. For questions about the newest SwiftUI stuff try Google Bard. The LLMs aren't perfect. There's still a place for Reddit and stack overflow, but I'd check with an LLM first. u/[deleted] https://www.reddit.com/r/swift/comments/174vuyo/comment/k4eayl9/?utm_source=share&utm_medium=web2x&context=3 I've tried copilot with SwiftUI and it's good for auto generating some things like specific styles, but not so good for other parts. Sometimes it helps with unit tests, but others it gets stuck in a loop. u/Zagerer https://www.reddit.com/r/swift/comments/13929qe/comment/jj0pti9/ Here is my journey coming from C++: Read through "A Swift Tour" and follow along in a Swift Playground. Many times, I feel, "Huh, this part is so much better than C++.", or "This is pretty much the same," I don't force myself to learn everything though, for example, I skipped protocol entirely. This process took me a few hours. As I dug into SwiftUI, I ran into syntax I didn't understand. Instead of looking up the official document, I just Google or ChatGPT it. When I start doing things in a C++ way that I always hate, I often pause and search if Swift does it better. Oftentimes, Swift does do it better! Still, I carry some baggage from C++ and later notice if I had done it differently, I would have saved myself a lot of trouble (for example, really thinking about whether things can be null or not). Don't be afraid of re-writing; it is part of the process. Today, I am still learning; however, I started to catch myself speaking in C++ "accent" using Swift, and oftentimes, I can Google/ChatGPT my way out of it.
u/AppleHitMyHead https://www.reddit.com/r/swift/comments/1724gke/comment/k481769/?utm_source=share&utm_medium=web2x&context=3
20 Assembly N/A N/A N/A N/A 5.43% 20 43,572 14,301 119,341 10,605 50,063 2.36 0.78 0 0 0 16.2k https://www.reddit.com/r/asm Assembly isn't one language, it's a general term for any human-readable representation of a processor's ISA. There are many assembly languages, and there are even different representations of the same ISA. I'm not sure what book you're using but there are operand order differences between AT&T and Intel x86 (although your example looks like AT&T). You shouldn't be using ChatGPT for any subject you aren't already familiar with though, or you won't be able to recognize when it's hallucinating, or even when it's simply lacking context. Just use a normal, reputable resource like the book you're following. I recommend checking out this wikibook for free online: https://en.wikibooks.org/wiki/X86_Assembly u/the_Demongod https://www.reddit.com/r/asm/comments/14q5qi8/comment/jqlmfvn/?utm_source=share&utm_medium=web2x&context=3 ChatGPT makes a good attempt, but it doesn't actually understand code — ESPECIALLY assembly language, where each instruction exists in a lot of context — and will usually have some kind of bugs in anything it writes. u/brucehoult https://www.reddit.com/r/asm/comments/14q5qi8/comment/jqp8rig/ Idk why all the chatGPT comments are downvoted, guys it is inevitable that it is going to be a standard part of our lives now. The sooner students start using it the sooner people will realize its limitations. It is a great learning tool and I use it when learning a new subject. u/dvof https://www.reddit.com/r/asm/comments/105vl0v/comment/j3hn8xp/?utm_source=share&utm_medium=web2x&context=3
21 Dart N/A N/A N/A 6.02% 19 91,732 171,518 230,340 241,706 264,888 Unspecified 0 0 0 0 39.8k reddit.com/r/dartlang The amazing thing about LLMs like ChatGPT is that they develop a kind of "language sense" and "know" how to stick together the right tokens to achieve a certain goal. They don't "understand" Dart - or any other programming language. They just emit tokens that I probably want to see :) Also, we cannot fully comprehend the amount of data that has been processed. Billions and billions of lines of code in dozens if not hundreds of languages. u/eibaan https://www.reddit.com/r/dartlang/comments/142fbkc/comment/jnoc1ph/?utm_source=share&utm_medium=web2x&context=3 Please note that ChatGPT is not sure about anything. It communicates that it knows what it says is true, but it's known to make up facts. Luckily the answer to your question is in the Dart docs. Alternatively StackOverflow has a sensible answer: https://stackoverflow.com/questions/57936263/dart-set-from-vs-set-of u/Rusty-Swashplate https://www.reddit.com/r/dartlang/comments/10yiu7d/comment/j7yflw0/?utm_source=share&utm_medium=web2x&context=3 Fantastic recommendations. I actually did have ChatGPT help me override toString for a ton of these classes nested within classes in this giant object I'm trying to print so I can mock. Didn't think to tweak the toString method like that. Not sure I understand your quoted getter though with the slashes. I'll play around with it Monday though. u/john2046 https://www.reddit.com/r/dartlang/comments/1390c2j/comment/jj0spnc/?utm_source=share&utm_medium=web2x&context=3
22 Lua N/A N/A 6.09% 18 22,413 139,939 717,566 166,471 366,575 6.58 2.81 2.9 0 0 19.0k https://www.reddit.com/r/lua/ First of all, don't use ChatGPT if you want to learn Lua. Refer to well-written resources such as the "Programming in Lua" book instead. u/appgurueu https://www.reddit.com/r/lua/comments/11dkwdl/comment/jacqn3z/?utm_source=share&utm_medium=web2x&context=3 Ask chatGPT to convert Java code / concepts into Lua... works surprisingly well u/gluecat https://www.reddit.com/r/lua/comments/12wj39f/comment/jhhg8qi/?utm_source=share&utm_medium=web2x&context=3 Do you not find Copilot frustrating? I cannot stand it, it's the worst thing for me. Whenever I've actually needed help with something, it's either given me absolute garbage code or missed the point entirely. Maybe I'm just bad at giving it instructions, who knows 😅 u/VitexHD https://www.reddit.com/r/lua/comments/13tfqs2/comment/jlytud8/
23 Ruby N/A N/A 6.23% 17 228,663 2,482,982 5,645,881 1,204,510 2,905,832 23.82 10.95 11.6 0 4.1 81.5k https://www.reddit.com/r/ruby/ Note that the failure mode for ChatGPT is that it will gaslight and lie to you. If you don't give it enough context, or the method names are ambiguous, there's a potential for it to make up explanations that sound plausible, but are dangerously incorrect. I'd advise talking to your team about the things that confuse you germane to your codebase, and only using ChatGPT for general Ruby content. u/throwaway-aso2fb https://www.reddit.com/r/ruby/comments/16y3bxq/comment/k36os5n/?utm_source=share&utm_medium=web2x&context=3 Not using copilot because of the controversy around it stealing source code. My manager, however, gave me a license to use tabnine at the moment. In... basic scaffolding code, it helps me speed up a bit by generating the blocks, for example to write specs quickly, providing about 75% of the structure needed to get the spec fleshed out, e.g. faster let declarations and do blocks. But for writing actual code I'm fighting it more than it's helping me, since it simply doesn't understand what I am trying to write. Documentation is... hit & miss, depending on whether it gets the meaning behind the variable names. u/OlivarTheLagomorph https://www.reddit.com/r/ruby/comments/zq847a/comment/j0yy2y8/?utm_source=share&utm_medium=web2x&context=3 I use Github copilot (which uses openai's codex) and occasionally throw some questions to ChatGPT. Currently I use it for Ruby and Kotlin. I committed to Copilot after trying it for five minutes. Total game changer. Time spent doing grunt work, writing repetitive tests etc., has dropped by 90% and I'm left with a lot more time to implement elegant solutions rather than throwing in quick fixes to meet tight deadlines. Sometimes it almost seems like it can read my mind.
You still need to have the experience and expertise to ensure it hasn't missed the point - it doesn't always have the full context of the problems you're working on - but I would wholeheartedly recommend it to any developer as a way to increase productivity. u/onionionion https://www.reddit.com/r/ruby/comments/11usmxs/comment/jcqdd8q/?utm_source=share&utm_medium=web2x&context=3
24 Kotlin N/A N/A N/A 9.06% 16 92,664 346,824 816,744 174,810 545,403 Unspecified 0 0 0 0 73.8k reddit.com/r/kotlin chatgpt doesn't know that Kotlin can use java libraries, which makes sense since it knows nothing. Chatgpt doesn't know that you target older Android versions with new languages. The reason there are more Java programs is just historical and doesn't benefit Java in any way. But chatgpt will never understand this since it can't understand anything. Here chatgpt is correct. It's amazing how it can produce a correct answer without having any idea what it's doing. u/Feztopia https://www.reddit.com/r/Kotlin/comments/zo6jpo/comment/j0lv16b/?utm_source=share&utm_medium=web2x&context=3 If you want a solid foundation, don't. ChatGPT is known for inventing things and confidently stating them as if they were true; if you don't have the knowledge to judge its output, you can't fully trust the answer. u/duongdominhchau https://www.reddit.com/r/Kotlin/comments/10tzne0/comment/j79pkls/?utm_source=share&utm_medium=web2x&context=3 Not mentioned yet, but I really believe ChatGPT and Copilot (and whatever is coming down the pike) really reduce the “learning a new language” hump for EVERY language, and definitely for Kotlin. Asking it to do idiomatic Kotlin usually produces quite good results, and asking it how to do a Java thing best in Kotlin definitely does well also. So every new Java developer will be adept at Kotlin even faster than before. u/LoveSpiritual https://www.reddit.com/r/Kotlin/comments/14bpuym/comment/jokay83/?utm_source=share&utm_medium=web2x&context=3
25 Rust N/A N/A 13.05% 15 39,147 400,875 947,751 239,196 941,468 40.35 2.68 2.8 0 3.5 256k https://www.reddit.com/r/rust/ I think programming is heading the same way as translation - a machine can give you a first draft, but experience is needed to verify and fix the resulting code. In the case of translation, many tools exist that will translate text from one language to another, but the results may be slightly or wholly inaccurate: knowledge of both the source and target languages is needed to verify the result. The same applies to code generation by GPT. The combination of a human and machine will probably give better results, faster. But unsupervised code generation in a general sense is still a way off. u/remontantcoprology https://www.reddit.com/r/rust/comments/zgkuq6/comment/izi6p21/?utm_source=share&utm_medium=web2x&context=3 The issue is that most of the time the code won't compile or has UB, so... It could be blazingly fast at giving you text, but if I need 5 or 10 minutes per try to check it's doing what I want, I prefer to write the code myself, and then I am sure it's doing what I want. In other langs like Python, maybe, but in complex langs like C++ or Rust it is not as good because of their complexity. I haven't tried, but in Rust you can't make a bubble sort loop without swap(i, j), and GPT could try the usual approach of array[i] = array[j], which won't work at all. u/JuanAG https://www.reddit.com/r/rust/comments/zgkuq6/comment/izhfvi3/?utm_source=share&utm_medium=web2x&context=3 I searched the huggingface hub for some LLM to help with Rust coding. But most of them are just for Python. Does anyone know some LLM just for Rust, or how to build one? Thanks u/AbleEstablishment155 https://www.reddit.com/r/rust/comments/16iz3fj/is_there_a_specific_llm_for_rust_coding/?utm_source=share&utm_medium=web2x&context=3
26 Go 13.24% 14 71,541 2,642,302 4,859,219 1,815,979 7,318,078 118.37 19.28 19.8 21.4 15 224k https://www.reddit.com/r/golang/ Personally, for me this is completely the wrong approach. Having the AI write it for you and then understanding what it wrote is less than optimal. You should use chatgpt to ask questions, not write code, if you don't understand it. Use it as a mentor who is never too busy to answer your questions, not as someone who will complete your homework and then maybe you'll try and understand it afterwards. If a student actually wants to learn a subject, do they get someone to complete their homework? You get what I mean? If your goal is just to complete a project in any way, then it might work, but most likely won't. You should understand and come up with the logic behind everything you write before letting AI write it for you. Copilot is good for predictable sequences, but for most things logic-wise it fails, as it does not know the implementation. u/vEncrypted https://www.reddit.com/r/golang/comments/16cs5md/comment/jzl928k/?utm_source=share&utm_medium=web2x&context=3 ChatGPT (mainly the UI) set a bad example. AI has been way more helpful to me for learning Go than going on Google or reading official docs, but not ChatGPT, rather Forefront, which can use GPT 3.5/4 or their own models; regardless, they have an Internet Search function that uses the model to simply summarize dozens of actually real pages it found in a way that is easier for me to understand compared to the original, especially since I can keep chain-asking "what is this/what is that", and all from me explaining step-by-step with "janky" English and the full code.
It also lists the pages it used so I can just click them and check it myself, (spoiler alert) it doesn't make as many mistakes as people think, even without search it does a great job understanding code, it won't usually solve more than basic problems and just keeps giving you different snippets to try but most of the time I end up fixing the issue because of the answers, even if the code doesn't work, I don't know how else to explain it. Of course my first language isn't English but I also learn almost entirely by example and docs don't usually have snippets for every little thing the code can do, it also sounds a bit advanced to me because it's just a lot of text with (programming/Go) terms that I usually don't use. u/DarkCeptor44 https://www.reddit.com/r/golang/comments/17okcs8/comment/k7zl74p/?utm_source=share&utm_medium=web2x&context=3 When I ask ChatGPT about it, it suggests model.go, view.go, controller.go etc. but says itself that the MVC concept does not exist in Go. So I'm interested how developer with some more experience than I in desktop apps would struct it. u/Prestigiouspite https://www.reddit.com/r/golang/comments/153pahy/comment/jsmdut2/?utm_source=share&utm_medium=web2x&context=3
27 PowerShell N/A N/A N/A N/A 13.59% 13 115,393 72,946 276,134 62,960 195,597 3.37 0.69 0 0 0 227k https://www.reddit.com/r/PowerShell/ No, as of now an LLM is just another tool in the toolbox. It makes good coders more effective. u/JesterOfSpades https://www.reddit.com/r/PowerShell/comments/13h8ak1/comment/jk3o7v7/?utm_source=share&utm_medium=web2x&context=3 ChatGPT is not a teaching tool. It isn't capable of understanding, so it cannot properly explain what it's doing. Anything it produces is suspect, because it isn't designed to produce working, clean, modern PowerShell code; it's designed to be a chatbot that puts words next to other words weighted by context clues. u/lanerdofchristian https://www.reddit.com/r/PowerShell/comments/171h3id/comment/k3s7ren/ I've had a mixed bag with copilot. Sometimes it has given pure gold that I didn't think about, but other times it suggests super lazy things like += arrays instead of creating a non-fixed array and adding to it. Oh, the hands-down biggest thing it has helped with is working with pester testing. Still learning about it, but copilot has certainly helped a bunch. u/Eimee_Inkari https://www.reddit.com/r/PowerShell/comments/14jy6n1/comment/jpq3yg9/?utm_source=share&utm_medium=web2x&context=3
28 PHP N/A 18.58% 12 1,462,608 2,550,461 9,196,172 2,286,391 4,036,079 183.19 61.41 64 0 13 162k https://www.reddit.com/r/PHP/ I've tried Chat GPT, as I've seen some Youtube videos where people act in amazement while saying, "wow! This is amazing, I just give ChatGPT a class and it gives me all the unit tests for it within seconds! Total game changer!". Yeah, doesn't work worth a shit, at least not for me. It'd be easier to just write the unit tests than refactor what Chat GPT gave me. u/mdizak https://www.reddit.com/r/PHP/comments/13l0hgf/comment/jknw4z9/?utm_source=share&utm_medium=web2x&context=3 I am under the impression that the update frequency of PHP libraries has gone down since ChatGPT was released. My interpretation is that many companies and developers are looking deeply into the AI stuff. And that is not in favor of PHP, so attention is moving away from PHP solutions (at least temporarily). Once the AI dust has settled we will see the real impact AI has on the PHP market. Anything else that might be relevant was already posted by other members here, so I won't go there. u/mission_2525 https://www.reddit.com/r/PHP/comments/16yb0d9/comment/k3t6vwq/ Might not be the answer you're looking for, but it probably wouldn't be hard to write your own PHPCS sniff for it. Edit: Here is ChatGPT going over how you'd write a sniff for it. I haven't tested it, so you might need to modify it a little bit to get it working. u/soowhatchathink https://www.reddit.com/r/PHP/comments/14k6z6i/does_codesnifferecs_have_the_possibility_to/jq43c4z/?context=8&depth=9
29 C N/A N/A N/A N/A 19.34% 11 400,941 1,300,955 5,240,188 1,285,709 3,741,913 222.88 183.83 48.9 0 55 147k https://www.reddit.com/r/C_Programming/ Hard agree with the last part. ChatGPT & other AI tools can be pretty awful for non-trivial C code. It often spits out things that might work in other syntactically similar C-style languages, such as using string literals as switch cases, or concatenating string literals with the + operator. It's the worst nightmare for someone who's actively learning to code; it will confidently answer your question incorrectly, while sounding completely reasonable. u/MyuuDio https://www.reddit.com/r/C_Programming/comments/17rzzy9/comment/k8mqxv5/ ChatGPT is failing you twice. First, because it's telling you about a bogus problem. Second, because it is not telling you about a real problem. The bogus problem is the redeclaration issue. It's technically correct that you will get a diagnostic if you try to define the same local variable twice in the same scope. But the solution there is trivial: don't define it, just re-use it. The more pernicious problem is handling or not handling the failure of realloc. When you overwrite the list variable with the result of realloc there is the possibility that the result is NULL. In that case, you have "lost" your original pointer. u/aghast_nj https://www.reddit.com/r/C_Programming/comments/178cc4l/comment/k4z9cby/?utm_source=share&utm_medium=web2x&context=3 I've been using copilot for nearly two years now. For me it's just a nice auto complete. I don't think it ever solves anything for me. It just makes me faster, especially with repetitive shit. u/Meatball_Subzero https://www.reddit.com/r/C_Programming/comments/16geaal/comment/k078frr/?utm_source=share&utm_medium=web2x&context=3
30 C++ 22.42% 10 801,823 2,767,540 9,245,881 2,255,179 5,192,579 192.84 87.73 290.5 69.9 52 260k https://www.reddit.com/r/cpp/ I use ChatGPT for tools and libs where the documentation is horrendous and it’s a coin toss as to whether it confidently talks truth or nonsense. I don’t think it’s a good idea for beginners to be leaning on it as a teaching aid. u/RainbowWarfare https://www.reddit.com/r/cpp/comments/172vc4q/comment/k3z07sj/?utm_source=share&utm_medium=web2x&context=3 My experience with ChatGPT is that it sucks ass with C++. Anything beyond basic syntax and programming it just gets wrong. My typical interaction is to ask it something specific, then spend the next 3 queries clarifying and then the next few pointing out issues in the code or methodology. I cannot recommend. u/TheBrainStone https://www.reddit.com/r/cpp/comments/172vc4q/comment/k3z96kd/?utm_source=share&utm_medium=web2x&context=3 I have github copilot enabled in my ide, so whatever it suggests I can either use it or ignore. I find it helpful in writing docstrings and filling out somewhat repetitive rows (e.g. pattern matching cases). But otherwise it is not that clever. I also use chatgpt in some rare cases when I am curious how would chatgpt solve this or that problem. It is good to write some simple, short functions; but it is not reliable enough to write medium to very complex algorithms. u/Asleep-Dress-3578 https://www.reddit.com/r/cpp/comments/172vc4q/comment/k3zprne/?utm_source=share&utm_medium=web2x&context=3
31 C# N/A 27.62% 9 1,606,619 1,191,927 4,581,919 1,489,756 2,521,561 128.37 36.83 38.4 0 21 233k https://www.reddit.com/r/csharp/ AI tools give me the code I need maybe 20% to 40% of the time. Another 30% or so I have to tweak it to make it work. For the remaining percentage, what it spits out needs so many changes that it's easier to write it myself than expect that I tweaked it without mistakes. Sometimes it feels like CoPilot might slow me down, since now I tend to hit a new line and wait 2-3 seconds to see what it suggests. u/Slypenslyde https://www.reddit.com/r/csharp/comments/1768d7o/comment/k4kguvf/?utm_source=share&utm_medium=web2x&context=3 I haven't found any in-IDE plug-in that's been all that great. I've used copilot in conjunction with chatGPT and find myself using chatGPT way more than copilot. Keep in mind I use LLMs more as an enhanced search engine than a code writer. For code, I find it helpful to get a second opinion on a refactor, handing over error messages, writing one-liners for some logic, and handing over a file to act as a second pair of eyes for what I can't see. Outside of code, I use it as a rubber ducky that can talk back when trying to think through some problems. Though tbh, the act of thinking about my problem and structuring it out into a prompt often solves my problem before I even hit send. Actually, now that I think about it, the damn thing has been a godsend for writing and debugging terraform. u/telewebb https://www.reddit.com/r/csharp/comments/1768d7o/comment/k4kod5z/?utm_source=share&utm_medium=web2x&context=3 Call me old, but I prefer to code things myself. AI is good to give you hints and steer you in the right direction. It can also write a lot of bullshit that looks like legit code. Then, debugging code that you didn't write gets very difficult. Remember that you write code once, but will read it many, many times. Have your boss pay for training.
u/quebecbassman https://www.reddit.com/r/csharp/comments/1768d7o/comment/k4kgylh/?utm_source=share&utm_medium=web2x&context=3
32 Java 30.55% 8 1,911,018 3,939,936 14,008,719 3,752,951 9,232,281 271.43 107.7 113.8 120.3 41 307k https://www.reddit.com/r/java/ Anyone who has tried to use ChatGPT to solve some real-world programming issues knows that even if you are able to replace 1-2 juniors with it, you will lose 1 senior to filter out the nonsense it can produce with full confidence. Not worth it. What's worse - I've seen many beginners treating AI as some form of oracle and believing everything it spits out even if it's all false. But AI is a powerful tool and it's worth checking it out and tracking its progress. Who knows what it will look like in a few years? u/ByerN https://www.reddit.com/r/java/comments/163eltc/comment/jy2asuq/?utm_source=share&utm_medium=web2x&context=3 I have to wonder if AI translation is deterministic. I use Github Copilot fairly often, and it returns schizophrenic suggestions apparently at random. It also seems stuck in pre-Java 8 syntax (I've never seen it use switch expressions, and it rarely uses streams). u/benjtay https://www.reddit.com/r/java/comments/16lu4wb/comment/k14rnx3/?utm_source=share&utm_medium=web2x&context=3 I've been using GitHub Copilot with Android Studio for a couple of months. It's actually amazing. It doesn't produce a ton of suggestions, but the ones it does produce are right a lot of the time. Even the wrong ones are often pretty close and only need minor editing. It won't write full classes but it can write short methods or blocks of code. Highly recommend. u/BarryFruitman https://www.reddit.com/r/java/comments/176t5vb/comment/k4rwd2t/?utm_source=share&utm_medium=web2x&context=3
33 Bash N/A N/A N/A 32.37% 7 154,693 866,313 3,605,350 574,292 2,121,149 8.69 3.01 0 0 0 61.7k https://www.reddit.com/r/bash/ chatgpt is very bad at bash. Every script that someone has posted here has had some really glaring errors, often data-destructive ones. In general, for every single use-case of chatgpt (or any other generative model), unless you understand the correct output you should not trust it. You can use it to produce documents and reports or even scripts, but you should always read the output carefully and validate that what it says is correct. u/[deleted] https://www.reddit.com/r/bash/comments/124h7gj/comment/jdzbtvp/?utm_source=share&utm_medium=web2x&context=3 I've tried getting it to write some code. Very little is useful. It still very much requires education and experience with the tools you use in order to get effective, clean, and efficient code. I had tried some python scripts, but you need to specify libraries and tools to be used, and it doesn't do that well. As it learns more, it may become better at this, but for now it's a neat toy without real world benefits u/RandomXUsr https://www.reddit.com/r/bash/comments/zix2am/comment/iztmsp3/?utm_source=share&utm_medium=web2x&context=3 This is more general advice for using chatGPT for generating bash scripts. chatGPT is a powerful tool, but it has both general and bash/linux related weaknesses. Never run a script you don't understand. That is a hard pill to swallow when learning bash, but thankfully you can ask chatGPT to explain its reasoning. To be sure, open a new conversation and ask for an explanation of part of the code there. You can also ask another instance for a general explanation of a new syntax or command, and then cross-check the original code. After seeing what chatGPT knows about an individual command, it doesn't hurt to quickly check the man-page anyway. ChatGPT is prone to using "general" syntax and flags even when they don't exist for some command.
Lastly, commands can change through years and environments. Your man-pages tell you what version you have. It's a good strategy to ask if any tools already exist for the task or are built in, before asking for a bash script. For example, you could script dropping your ssh-key in a remote machine's .ssh-dir and then appending it to the trusted-keys file (or in folder) - or you can just use the ssh command's built-in add-key option. There are a lot of tools built in to your average Linux installation, and your distro's repos are full of even more lightweight, trustworthy tools (as long as you stick to the official repos). If you aren't exactly sure how a script behaves or if the syntax is robust, create your own test environments. You can create virtual (or real) directory structures, quickly fill them with very small files and run the script without touching your actual data. Ask chatGPT for more information (and use the above steps to understand what it says). Related to the last point, pay attention especially to these aspects of any script chatGPT spews back: hardcoded paths (or less strictly, any path that isn't declared as a variable at the start of the script) - if, instead of a robust test environment, you just use a directory with subdirectories, hardcoded paths can escape that environment; connections outside your machine/local network - while I feel it is unlikely that chatGPT will compromise your system by opening an unsafe connection to an unsafe address, the risk is worth mitigating. What if the first guy who got that address noticed it's not used, and bought it to distribute malware, hoping chatGPT offers it again? But the more likely problem is that you can rapidly pull a lot of data from the internet. It just opens up more doors to make a mess; modifying files in /etc, or your bootloader.
You can cause all kinds of damage, including permanently disabling rights to modify the files to fix it (misconfigured privileges), making your system unbootable (fstab, grub), and just generally messing up your system. Back it up before any changes, read the man-pages twice, make small tests (and remember you usually need to reload systemd or reboot before changes take effect) u/stepbroImstuck_in_SU https://www.reddit.com/r/bash/comments/123buum/comment/jduund7/?utm_source=share&utm_medium=web2x&context=3
34 TypeScript N/A 38.87% 6 224,865 2,043,216 4,224,408 1,455,167 2,941,085 131.46 24.59 24.9 0 9.2 115k https://www.reddit.com/r/typescript/ ChatGPT is great for common knowledge, but it just bullshits for more esoteric stuff. Case in point: & {}: This basically "seals" the type, making it impossible to add new properties to it. This is just pure nonsense as near as I can tell. A big red flag is how vague it is. What does "seals the type" mean? For that matter, what does it mean to "add new properties" to a type? I messed around with it a bit in a TypeScript Playground and I can find no behavior that remotely corresponds to this explanation from ChatGPT. u/delventhalz https://www.reddit.com/r/typescript/comments/17i01kj/comment/k6tvg8v/?utm_source=share&utm_medium=web2x&context=3 As someone also somewhat new to typescript but very comfortable with javascript, I know what you're going through. Something I've found to be super useful is asking chatGPT questions when something doesn't make sense to me. It usually provides a correct type and allows me to move on with what I'm trying to do instead of banging my head against the wall for 20 minutes. u/k3l2m1t https://www.reddit.com/r/typescript/comments/13h0n0h/comment/jk2yehs/?utm_source=share&utm_medium=web2x&context=3 I don't think copilot supports typescript more than any other language. It often gives me incorrect suggestions when it comes to typescript. Probably the only reason I might end up dropping it, actually. u/thinkmatt https://www.reddit.com/r/typescript/comments/pzmlvt/comment/hf4khk4/?utm_source=share&utm_medium=web2x&context=3
35 SQL N/A N/A N/A N/A 48.66% 5 667,216 123 1170 0 0 18.15 5.67 0 0 0 162k https://www.reddit.com/r/SQL I've used ChatGPT Plus, basically the paid version using GPT-4, and while it has helped suggest some new ways of querying stuff that I hadn't considered, it also just completely made things up. Even when I asked it to clarify, like "are you sure that function actually exists?", it would apologize and then say the exact same wrong thing lol. There's no real bullshit filter for these LLMs. u/paymesucka https://www.reddit.com/r/SQL/comments/14e04k3/comment/josxeg3/?utm_source=share&utm_medium=web2x&context=3 I'm a DBA, 15 years. Chatgpt and other AIs are great up to about the skill level of an intern you'd hire as a jr. After that level of task... it takes more time and effort to vet its output than it saves. I don't think it's a good tool for those learning, as they won't ever develop the skill to spot when and where the AI is wrong. I think there will be a wall of skill that will be impossible to climb for those who use it rather than working through problems on their own first. If you have the discipline to work the problem yourself and only use it if really stuck or to try an alternative, then it can be a nice assistant, like a personal intern that occasionally lies and tries to set you up for failure. u/Festernd https://www.reddit.com/r/SQL/comments/127zawr/comment/jeia6hv/?utm_source=share&utm_medium=web2x&context=3 Mostly to debug, but I change the table names for privacy reasons. Once in a while I'll ask it to write code from my plain English when I'm trying to solve a problem. I'll give it my broken code or some context first. u/feigndeaf https://www.reddit.com/r/SQL/comments/12oo0lm/comment/jgj204k/?utm_source=share&utm_medium=web2x&context=3
36 Python 49.28% 4 2,174,258 6,058,516 17,546,799 4,367,863 11,547,682 190.73 52.03 11.6 55.9 16 1.2m https://www.reddit.com/r/Python/ ChatGPT will make some programmers obsolete. Not because it can program better than them, but because one competent programmer that masters ChatGPT will be able to do the job of 2-3 of his colleagues in the same amount of time. u/Feb2020Acc https://www.reddit.com/r/Python/comments/10ytgkk/comment/j806l89/?utm_source=share&utm_medium=web2x&context=3 I've used it for ideas. Ask it for code for something I'm writing just to see what it suggests. But I don't just copy/paste the code into my project. The first rule of using ChatGPT for coding is, you should only be using ChatGPT for coding if you don't actually need to use ChatGPT for coding. Like, it's good for ideas because it's basically trained on Stackoverflow and the docs, and it's impossible to have heard of or remember every package, module, and function. But if you don't understand what it gives you and you just paste it in, you're not learning anything and are leaving yourself open to big problems. u/bamacgabhann https://www.reddit.com/r/Python/comments/12wsx2g/comment/jhgagc7/?utm_source=share&utm_medium=web2x&context=3 I had this exact same crisis of faith in Python about a year ago. The thing that really annoyed me was how much more effortful it was to create some of the features typed languages (especially C# with great interface support) had with a weaker guarantee. AI has fundamentally changed that for me. The SOTA LLMs can code best in python, can create type hints, documentation, and basic assertions/tests nearly free, and the localized hints about type give the AI great hints on how to code. If you accept AI as "augmented intelligence" then coding with python can be a very productive experience in 2023. u/marr75 https://www.reddit.com/r/Python/comments/15r05mq/comment/jw5yilu/?utm_source=share&utm_medium=web2x&context=3
37 CSS N/A N/A N/A N/A 52.97% 2 800,588 443,082 4,314,244 436,767 1,673,966 145.33 22.67 0 0 0 115k https://www.reddit.com/r/css I'm not sure how it could help learn. I spent a little while messing with it and trying to generate some html/css/js for a simple responsive hamburger menu. Results were mixed. It got me most of the way there, but had trouble really putting it all together into one menu that worked as intended. I could have spent more time trying to manipulate it, but that would've taken more time than it would have taken to make the thing by hand. On some level it's just google with extra steps, since you need to check and verify everything it outputs. I found that Lucas from LTT had a good assessment of it: it's usually pretty good, but when it's wrong, it's confidently wrong. I think it would be a crappy teaching aid, since the student doesn't immediately recognize when the bot is wrong or why the code it produced doesn't work. u/Kthulu666 https://www.reddit.com/r/css/comments/zudl9x/comment/j1ikchb/?utm_source=share&utm_medium=web2x&context=3 I use chatgpt daily and it works wonders, if you know what you're reading. Otherwise, if you don't know something as a complete beginner and take chatgpt's response as gospel, you're gonna be in a world of hurt when it starts lying to you, giving 3-year-old outdated information. u/ipromiseimnotakiller https://www.reddit.com/r/css/comments/17gcln8/comment/k6g1esr/?utm_source=share&utm_medium=web2x&context=3 In that case it's great. And I like ChatGPT too. But a complete beginner doesn't see possible flaws in the solution. So there is the possibility they learn a bad practice. I use ChatGPT too sometimes, but you will need to look at the code. Don't just copy and paste. u/cryothic https://www.reddit.com/r/css/comments/16owij3/comment/k1tjfqg/?utm_source=share&utm_medium=web2x&context=3
38 HTML N/A N/A N/A N/A 52.97% 2 1,183,299 1,140,227 7,284,841 786,699 2,055,453 746.33 118.12 0 0 0 46.5k https://www.reddit.com/r/HTML/ i actually used chatgpt to some extent but it doesn't help more than giving directions. i could come up with a fairly okay layout with objects and movement with chatgpt but it doesn't do much more than that u/cryothic https://www.reddit.com/r/css/comments/16owij3/comment/k1tjfqg/ ChatGPT is up to date as of 2021. That means that any information you get from it is already 2 years out of date. For fast moving languages like Golang, JavaScript, TypeScript, Rust, etc., that's too old. I've been able to make use of it because I have questions about setting up servers and how to refactor old Perl code, but other than that it's just not ready for primetime, yet, IMHO. u/russlo https://www.reddit.com/r/HTML/comments/11rb46v/comment/jc7yd4f/?utm_source=share&utm_medium=web2x&context=3 One thing about ChatGPT is that it names its IDs and classes very specifically and rarely uses element-level styles. In my experience, it will give an element a class even if it's the only one on the page. I'm not sure if this practice differs based on the version. u/steelfrog https://www.reddit.com/r/HTML/comments/17knwvb/comment/k7943pw/?utm_source=share&utm_medium=web2x&context=3
39 JavaScript 63.61% 1 2,518,260 6,390,411 22,397,798 6,753,636 23,751,668 486.2 87.82 88 24.7 22 2.4m https://www.reddit.com/r/javascript/ ChatGPT for faster and consise search results and thats all .Co Pilot isn't my cup of tea. u/Ok-Hospital-5076 https://www.reddit.com/r/javascript/comments/17o0p9o/comment/k7xhnws/?utm_source=share&utm_medium=web2x&context=3 i use chat gpt occasionally instead of google, it’s ok for some small specific functions but it just saves me 10 minutes here and there. i can definitely imagine my life without it since i often forget it exists u/andeee23 https://www.reddit.com/r/javascript/comments/17o0p9o/comment/k7ww57w/?utm_source=share&utm_medium=web2x&context=3 I use copilot for autocompletion and chatgpt as sort of a "documentation oracle". gpt4 gives "ok" code, but it where it really shines is asking it to explain something or write a simple implementation. u/alphabet_american https://www.reddit.com/r/javascript/comments/17o0p9o/comment/k7wvdjl/?utm_source=share&utm_medium=web2x&context=3
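The dataset figures in the rows above are sizes in GB for The Stack, CodeParrot, AlphaCode, CodeGen, and PolyCoder, in that order; a quick Python sketch (values copied from the rows, script is purely illustrative) totals each language's footprint across the five corpora:

```python
# GB of code per language in The Stack, CodeParrot, AlphaCode,
# CodeGen, and PolyCoder, copied from the rows above
datasets_gb = {
    "Python":     [190.73, 52.03, 11.6, 55.9, 16],
    "CSS":        [145.33, 22.67, 0, 0, 0],
    "HTML":       [746.33, 118.12, 0, 0, 0],
    "JavaScript": [486.2, 87.82, 88, 24.7, 22],
}

for language, sizes in datasets_gb.items():
    # Simple sum of the listed figures, e.g. Python: 326.26 GB
    print(f"{language}: {sum(sizes):.2f} GB across the five corpora")
```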

@ -1,48 +0,0 @@
# Lisp
Lisp is the #34 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Lisp is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Lisp is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Lisp is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Lisp is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Lisp is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Lisp is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Lisp is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Lisp is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Lisp is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Lisp has 6,945 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Lisp projects have had 8,431 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Lisp projects have had 12,870 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Lisp projects have had 73,903 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Lisp projects have had 47,157 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/KaranasToll](https://www.reddit.com/r/lisp/comments/138aovs/comment/jixfrkr/)
> Chat gpt is known to lie and be confident in its incorrectness. Also, try telling it to convert a program from lisp to python that uses advanced features like the condition system.
[u/friedrichRiemann](https://www.reddit.com/r/lisp/comments/11lwwv1/possible_effects_of_aiassisted_tools_on_lisps/?utm_source=share&utm_medium=web2x&context=3)
> How do you think the advent of ChatGPT and Copilot would affect the adoption and popularity of Common Lisp, Clojure and Schemes? On one hand, Large Language Models did not have access to these "niche" languages for training as much as the more popular alternatives like Python and Typescript so the quality of their output would be worse in comparison. On the other hand, the "interactive" aspect of LISP in that you code stuff, test in REPL and code again would not be so unique since the developer can just use the chat system to refine his solution. The other upside that LISPs had over the likes of Rust and C++ is the lack of syntax clutter and cleanness of s-expressions. In this front too, they would hurt from the likes of ChatGPT since the syntactic complexity is handled by the LLM not the developer.
[/u/Fine_Impression_3171](https://www.reddit.com/r/ChatGPT/comments/12o4k1n/looking_for_pretrained_gpt_with_lisp_autocad/)
> I'm an engineer working in the construction field, and I'm currently trying to create a Lisp routine for a project I'm working on. I've been trying to use GPT to generate the code, but I'm having some trouble getting it to work properly. I was wondering if anyone knows of a pre-trained GPT that has been specifically trained on Lisp code. I've been searching online, but I haven't had any luck so far. If anyone knows of a pre-trained GPT with Lisp, or has any tips for training my own GPT on Lisp code, I would really appreciate the help.

@ -1,48 +0,0 @@
# Lua
Lua is the #18 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Lua is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ Lua is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Lua is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Lua is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Lua makes up 6.58 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Lua makes up 2.81 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ Lua makes up 2.9 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Lua is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Lua is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Lua has 22,413 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Lua projects have had 139,939 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Lua projects have had 166,471 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Lua projects have had 717,566 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Lua projects have had 366,575 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/appgurueu](https://www.reddit.com/r/lua/comments/11dkwdl/comment/jacqn3z/?utm_source=share&utm_medium=web2x&context=3)
> First of all, don't use ChatGPT if you want to learn Lua. Refer to the well-written resources such as the "Programming in Lua" book instead.
[u/gluecat](https://www.reddit.com/r/lua/comments/12wj39f/comment/jhhg8qi/?utm_source=share&utm_medium=web2x&context=3)
> Ask chatGPT to convert java / concepts into language to Lua... works surprisingly well
[u/VitexHD](https://www.reddit.com/r/lua/comments/13tfqs2/comment/jlytud8/)
> Do you not find Copilot frustrating? I cannot stand it, it's the worst thing for me. Whenever I've actually needed help with something, it's either: Gave me absolute garbage code. Missed the point entirely. Maybe I'm just bad at giving it instructions, who knows 😅

@ -1,48 +0,0 @@
# MATLAB
MATLAB is the #24 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ MATLAB is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ MATLAB is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ MATLAB is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ MATLAB is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ MATLAB is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ MATLAB is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ MATLAB is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ MATLAB is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ MATLAB is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
MATLAB has 94,777 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
MATLAB projects have had 23,655 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
MATLAB projects have had 33,289 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
MATLAB projects have had 266,359 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
MATLAB projects have had 84,982 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/worblyhead](https://www.reddit.com/r/matlab/comments/12fwjx5/comment/jficv03/?utm_source=share&utm_medium=web2x&context=3)
> Yep, pretty much all the MATLAB code ChatGPT write for me worked. There was one instance whereby there was a multiplication that went away as it used * instead of .* To multiply two vectors. When I pointed that out, it corrected the code. In this case it was an order of operations issue and it correctly got it sorted by adjusting the parentheses. Pretty impressive so far.
[u/Latter_Trouble_3227](https://www.reddit.com/r/matlab/comments/y07uop/comment/jbgoj6h/?utm_source=share&utm_medium=web2x&context=3)
> Yes, you can use Co-Pilot with Matlab code. However, it won't work with the usual MATLAB IDE, so you have to use one of the supported IDEs (e.g. VS Code or JetBrains).
[u/LevelHelicopter9420](https://www.reddit.com/r/matlab/comments/12fwjx5/comment/jfll3tu/?utm_source=share&utm_medium=web2x&context=3)
> Why would you think such a simple plot with callback on click would not work? Now I wonder if it made the callback zoom-safe. I was using update callbacks after only 8 months of college experience with Matlab. And yet, I cant make chatGPT to give me the correct answer to a function inverse involving rational polynomials (at least the steps it got right, allowed me to remember how to do function inverses)
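The `*` vs `.*` slip in the first anecdote is easy to reproduce outside MATLAB as well; a minimal pure-Python analogue (illustrative, not MATLAB code) shows the two different operations ChatGPT confused:

```python
a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]

# Element-wise product: what MATLAB's .* computes
elementwise = [x * y for x, y in zip(a, b)]

# Inner product: what MATLAB's * does for conformable vectors
inner = sum(x * y for x, y in zip(a, b))

print(elementwise, inner)  # → [4.0, 10.0, 18.0] 32.0
```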

@ -1,37 +0,0 @@
# Objective-C
Objective-C is the #31 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Objective-C is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Objective-C is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Objective-C is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Objective-C is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Objective-C is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Objective-C is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Objective-C is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Objective-C is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Objective-C is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Objective-C has 292,409 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Objective-C projects have had 263,146 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Objective-C projects have had 397,275 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Objective-C projects have had 1,172,307 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Objective-C projects have had 3,003,177 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)

@ -1,48 +0,0 @@
# Perl
Perl is the #29 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Perl is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Perl is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ Perl is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Perl is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Perl makes up 5.5 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Perl makes up 4.7 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Perl is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Perl is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Perl is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Perl has 67,938 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Perl projects have had 125,129 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Perl projects have had 117,426 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Perl projects have had 634,214 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Perl projects have had 188,697 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/briandfoy](https://www.reddit.com/r/perl/comments/10j0k00/comment/j5ki948)
> There are a few problems with this, and I noticed the exact same thing with the GitHub Copilot. It's barfing out examples it was trained on with no idea about what they do, whether they work, and if they are current. Transaction objects no longer have a success method. This was deprecated for a long time ago and finally removed in version 9. The error method returns a single value. Minor problem, but still cruft that shouldn't be there. Call json on the response to get the data structure rather than doing this yourself. Even then, using JSON directly, while fine, skips over the Mojo::JSON::decode_json. It's a bit of a pain in the butt, but work hard to use the same parser everywhere in an application since they tend to have slight differences (say, like how they represent null, true, or false). Somewhere along the line, ChatGPT saw this code or something very similar. It then returns it to you with no intelligence about what it is doing or what it should do. It's very likely that the source ChatGPT saw is not only old, but also unsophisticated. You're likely just cargo-culting off StackOverflow with extra steps. But, this also isn't the way you probably want to write code. You don't want to return the token really, You want to add that to the user-agent so it provides it in every request without additional code from you. I have plenty of examples in Mojo Web Clients. That's another problem with the source material for these sorts of things: it's training itself off public data, but often our examples are mere demonstrations of ideas rather than advice on reliable software engineering (since we aren't going to write a book for every question someone asks).
[u/nobono](https://www.reddit.com/r/perl/comments/10j0k00/comment/j5l9s1c/)
> "Somewhere along the line, ChatGPT saw this code or something very similar. It then returns it to you with no intelligence about what it is doing or what it should do." IMO, this is quite irrelevant, because you must understand that whatever output - be it code, poems or whatever - from an AI-assisted service is not perfect. The main point is: it helps. And that's its main selling point today, because that's how StackOverflow also works: sometimes it's perfect, but most of the times it just helps, maybe because you have addressed the wrong audience, didn't word your question/problem correctly or otherwise. With ChatGPT you get an instant reply, and you can ask it to refine its reply. Instantly. Rinse and repeat. So if it use StackOverflow data (which I assume it does) it's already better in the sense that it's instant and filters out noise, especially personal attacks, or otherwise replies that intimidates the person asking the questions. "It then returns it to you with no intelligence about what it is doing or what it should do." Let's be honest, we have all been there and/or we have had colleagues who fits that description. :)
[u/its_a_gibibyte](https://www.reddit.com/r/perl/comments/14capfv/comment/jol2a4b)
> You mentioned being new to perl and programming. Personally, I think ChatGPT is a great resource for these types of question. I asked it your question and copied the function from csv2fasta.pl
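The parser-consistency point in the first anecdote (different JSON libraries can represent null, true, or false differently) carries over to any language; a small sketch with Python's stdlib parser, as an analogue rather than Perl code:

```python
import json

payload = '{"ok": true, "error": null, "count": 3}'
decoded = json.loads(payload)

# The stdlib parser maps JSON true/false/null to True/False/None;
# mixing parsers with different conventions inside one application
# is exactly the inconsistency the anecdote warns about.
print(decoded)  # → {'ok': True, 'error': None, 'count': 3}
```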

@ -1,48 +0,0 @@
# PHP
PHP is the #12 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ PHP is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ PHP is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ PHP is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ PHP is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ PHP makes up 183.19 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ PHP makes up 61.41 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ PHP makes up 64 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ PHP is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ PHP makes up 13 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
PHP has 1,462,608 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
PHP projects have had 2,550,461 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
PHP projects have had 2,286,391 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
PHP projects have had 9,196,172 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
PHP projects have had 4,036,079 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/mdizak](https://www.reddit.com/r/PHP/comments/13l0hgf/comment/jknw4z9/?utm_source=share&utm_medium=web2x&context=3)
> I've tried Chat GPT as I've seen some Youtube videos where people act in amazement while saying, "wow! This s amazing, I just give ChatGPT a class and it gives me all the unit tests for it within seconds! Total game changer!". Yeah, doesn't work worth a shit, at least not for me. It'd easier to just write the unit tests than refactor what Chat GPT gave me.
[u/mission_2525](https://www.reddit.com/r/PHP/comments/16yb0d9/comment/k3t6vwq/)
> I am under the impression that the update frequency of PHP libraries has gone down since ChatGPT was released. My interpretation is, that many companies and developers are looking deeply into the AI stuff. And that is not in favor of PHP so that the attention is moving away from PHP solutions (at least temporarily). Once the AI dust has settled we will see the real impact AI has on the PHP market. Anything else what might be relevant was already posted by other members here, so I won't go there.
[u/soowhatchathink](https://www.reddit.com/r/PHP/comments/14k6z6i/does_codesnifferecs_have_the_possibility_to/jq43c4z/?context=8&depth=9)
> Might not be the answer you're looking for but it probably wouldn't be hard to write your own PHPCS sniff for it. Edit: Here is ChatGPT going over how you'd write a sniff for it. I haven't tested it so you might need to modify it a little bit to get it working.

@ -1,48 +0,0 @@
# PowerShell
PowerShell is the #13 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ PowerShell is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ PowerShell is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ PowerShell is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ PowerShell is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ PowerShell makes up 3.37 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ PowerShell makes up 0.69 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ PowerShell is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ PowerShell is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ PowerShell is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
PowerShell has 115,393 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
PowerShell projects have had 72,946 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
PowerShell projects have had 62,960 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
PowerShell projects have had 276,134 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
PowerShell projects have had 195,597 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/JesterOfSpades](https://www.reddit.com/r/PowerShell/comments/13h8ak1/comment/jk3o7v7/?utm_source=share&utm_medium=web2x&context=3)
> No, as of now LLM is Just another tool in the toolbox. It makes good coders more effective.
[u/lanerdofchristian](https://www.reddit.com/r/PowerShell/comments/171h3id/comment/k3s7ren/)
> ChatGPT is not a teaching tool. It isn't capable of understanding, so it cannot properly explain what it's doing. Anything it produces is suspect, because it isn't designed to produce working, clean, modern PowerShell code, it's designed to be a chatbot that puts words next to other words weighted by context clues.
[u/Eimee_Inkari](https://www.reddit.com/r/PowerShell/comments/14jy6n1/comment/jpq3yg9/?utm_source=share&utm_medium=web2x&context=3)
> I've had a mixed bag with copilot. Sometimes it has given pure gold that I didn't think about but other times it suggests super lazy things like += arrays instead of creating a non-fixed array and adding to it. OH the hands down biggest thing it has helped with is working with pester testing. Still learning about it but copilot has certainly helped a bunch.
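The `+=` complaint in the last anecdote refers to PowerShell arrays being fixed-size, so every `+=` reallocates and copies the whole array; a rough Python analogue of the same quadratic pattern (illustrative, not PowerShell):

```python
def append_with_copy(n):
    acc = ()             # immutable tuple: each += allocates a copy of
    for i in range(n):   # everything so far, like += on a fixed-size
        acc += (i,)      # PowerShell array — O(n^2) work overall
    return list(acc)

def append_in_place(n):
    acc = []             # growable list, like a .NET List[T]
    for i in range(n):
        acc.append(i)    # amortized O(1) per element
    return acc

print(append_with_copy(5) == append_in_place(5))  # → True
```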

@ -1,48 +0,0 @@
# Python
Python is the #4 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Python is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ Python is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ Python is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
✅ Python is one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
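The benchmarks above all report pass@k-style scores: for each problem, n samples are generated, c of them pass the unit tests, and pass@k estimates the chance that at least one of k drawn samples passes. A short sketch of the standard unbiased estimator (illustrative only, not this page's evaluation code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples generated per problem, c of them correct."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so every k-sample
        # draw must include at least one correct solution.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# 5 generations with 2 passing: pass@2 = 1 - C(3,2)/C(5,2)
print(round(pass_at_k(5, 2, 2), 3))  # → 0.7
```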
## Datasets
✅ Python makes up 190.73 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Python makes up 52.03 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ Python makes up 11.6 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
✅ Python makes up 55.9 GB of the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ Python makes up 16 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Python has 2,174,258 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Python projects have had 6,058,516 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Python projects have had 4,367,863 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Python projects have had 17,546,799 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Python projects have had 11,547,682 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
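Figures like the tag count above can be refreshed from the public Stack Exchange API; a hedged sketch of parsing the `/2.3/tags` response shape (endpoint and field names per my reading of API v2.3, and the sample payload below is invented, not live data):

```python
import json

# Shape of a response from (assumed):
#   https://api.stackexchange.com/2.3/tags?inname=python&site=stackoverflow
sample = json.loads(
    '{"items": [{"name": "python", "count": 2174258},'
    ' {"name": "python-3.x", "count": 350000}], "has_more": false}'
)

def tag_count(payload, tag):
    """Return the question count for an exact tag name, or None."""
    for item in payload.get("items", []):
        if item.get("name") == tag:
            return item["count"]
    return None

print(tag_count(sample, "python"))  # → 2174258
```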
## Anecdotes from developers
[u/Feb2020Acc](https://www.reddit.com/r/Python/comments/10ytgkk/comment/j806l89/?utm_source=share&utm_medium=web2x&context=3)
> ChatGPT will make some programmers obsolete. Not because it can program better than them, but because one competent programmer that masters ChatGPT will be able to do the job of 2-3 of his colleagues in the same amount of time.
[u/bamacgabhann](https://www.reddit.com/r/Python/comments/12wsx2g/comment/jhgagc7/?utm_source=share&utm_medium=web2x&context=3)
> I've used it for ideas. Ask it for code for something I'm writing just to see what it suggests. But I don't just copy/paste the code into my project. The first rule of using ChatGPT for coding is, you should only be using ChatGPT for coding if you don't actually need to use ChatGPT for coding. Like, it's good for ideas because it's basically trained on Stackoverflow and the docs, and it's impossible to have heard of or remember every package, module, and function. But if you don't understand what it gives you and you just paste it in, you're not learning anything and are leaving yourself open to big problems.
[u/marr75](https://www.reddit.com/r/Python/comments/15r05mq/comment/jw5yilu/?utm_source=share&utm_medium=web2x&context=3)
> I had this exact same crisis of faith in Python about a year ago. The thing that really annoyed me was how much more effortful it was to create some of the features typed languages (especially C# with great interface support) had with a weaker guarantee. AI has fundamentally changed that for me. The SOTA LLMs can code best in python, can create type hints, documentation, and basic assertions/tests nearly free, and the localized hints about type give the AI great hints on how to code. If you accept AI as "augmented intelligence" then coding with python can be a very productive experience in 2023.

# R
R is the #22 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ R is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ R is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ R is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ R is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ R is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ R is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ R is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ R is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ R is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
R has 499,872 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
R projects have had 51,800 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
R projects have had 88,649 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
R projects have had 506,309 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
R projects have had 91,654 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/2truthsandalie](https://www.reddit.com/r/Rlanguage/comments/17q56xq/comment/k8b2phr/?utm_source=share&utm_medium=web2x&context=3)
> It's even helpful for example datasets. If you want to test or play around, it will create an example dataframe. Also, if you know one programming language it can help translate. It will even rewrite the code to look better. E.g. write this code in Python pandas but make it more readable like R dplyr. Anything regex is nice, as I don't have to hope a specific example is on Stack Overflow. ChatGPT, from my experience, will often favor doing things with for loops instead of taking advantage of dplyr or pandas functions. With everything ChatGPT does, though, check the code, as it will confidently give you an answer and even print out a fake output. Often, pointing out its error gets ChatGPT to fix the code.
[u/DrLyndonWalker](https://www.reddit.com/r/Rlanguage/comments/17q56xq/comment/k8bi6nq/?utm_source=share&utm_medium=web2x&context=3)
> I have found it hit and miss. I was able to knock up simple Shiny apps in a minute (https://youtu.be/8oJ1HtkpDt0) but have had it write nonsense code for some other things I was trying (especially webscraping). GPT Studio is pretty good (demo here https://youtu.be/QQfDTLExoNU) but, as someone else mentioned, take a look at GitHub Copilot.
[u/jrdubbleu](https://www.reddit.com/r/Rlanguage/comments/17q56xq/comment/k89wmhi/?utm_source=share&utm_medium=web2x&context=3)
> I do it constantly, not only for debugging which it is spectacular at, but for especially tedious things like using ggplot. If you can think it, GPT-4 and the other specialized models can code it. The real key is to put thought into the question you want to answer with the code and then to very deliberately tell the GPT what to do. For example, “I have a data frame with x, y, z variables. Please write R code to perform a, b, c statistical analysis. Place the results into a variable called results.” And so on.

# Ruby
Ruby is the #17 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Ruby is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Ruby is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ Ruby is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Ruby is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Ruby makes up 23.82 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Ruby makes up 10.95 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ Ruby makes up 11.6 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Ruby is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ Ruby makes up 4.1 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Ruby has 228,663 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Ruby projects have had 2,482,982 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Ruby projects have had 1,204,510 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Ruby projects have had 5,645,881 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Ruby projects have had 2,905,832 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/throwaway-aso2fb](https://www.reddit.com/r/ruby/comments/16y3bxq/comment/k36os5n/?utm_source=share&utm_medium=web2x&context=3)
> Note that the failure mode for ChatGPT is that it will gaslight and lie to you. If you don't give it enough context, or the method names are ambiguous, there's a potential for it to make up explanations that sound plausible, but are dangerously incorrect. I'd advise talking to your team about the things that confuse you germane to your codebase, and only using ChatGPT for general Ruby content.
[u/OlivarTheLagomorph](https://www.reddit.com/r/ruby/comments/zq847a/comment/j0yy2y8/?utm_source=share&utm_medium=web2x&context=3)
> Not using copilot for the controversy around it stealing source code. Manager gave me a license however to use tabnine at the moment. In...basic scaffolding code it helps me speed up a bit by generating the blocks for example to write specs quickly, providing about 75% of the structure needed to get the spec fleshed out, e.g faster let declarations and do blocks. But for writing actual code I'm fighting it more than its helping me, since it simply doesn't understand what I am trying to write. Documentation is....hit&miss depending on whether it gets the meaning behind the variable names.
[u/onionionion](https://www.reddit.com/r/ruby/comments/11usmxs/comment/jcqdd8q/?utm_source=share&utm_medium=web2x&context=3)
> I use Github copilot (which uses openai's codex) and occasionally throw some questions to ChatGPT. Currently I use it for Ruby and Kotlin. I committed to Copilot after trying it for five minutes. Total game changer. Time spent doing grunt work, writing repetitive tests etc, has dropped by 90% and I'm left with a lot more time to implement elegant solutions rather than throwing in quick fixes to meet tight deadlines. Sometimes it almost seems like it can read my mind. You still need to have the experience and expertise to ensure it hasn't missed the point - it doesn't always have the full context of the problems you're working on - but I would wholeheartedly recommend it to any developer as a way to increase productivity.

# Rust
Rust is the #15 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Rust is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ Rust is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Rust is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Rust is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Rust makes up 40.35 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Rust makes up 2.68 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ Rust makes up 2.8 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Rust is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ Rust makes up 3.5 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Rust has 39,147 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Rust projects have had 400,875 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Rust projects have had 239,196 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Rust projects have had 947,751 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Rust projects have had 941,468 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/remontantcoprology](https://www.reddit.com/r/rust/comments/zgkuq6/comment/izi6p21/?utm_source=share&utm_medium=web2x&context=3)
> I think programming is heading the same way as translation - a machine can give you a first draft, but experience is needed to verify and fix the resulting code. In the case of translation, many tools exist that will translate text from one language to another, but the results may be slightly or wholly inaccurate: knowledge of both the source and target languages is needed to verify the result. The same applies to code generation by GPT. The combination of a human and machine will probably give better results, faster. But unsupervised code generation in a general sense is still a way off.
[u/JuanAG](https://www.reddit.com/r/rust/comments/zgkuq6/comment/izhfvi3/?utm_source=share&utm_medium=web2x&context=3)
> The issue is that most of the time the code won't compile or will have UB, so... It could be blazingly fast at giving you text, but if I need 5 or 10 minutes per try to check it's doing what I want, I'd prefer to write the code myself and then be sure it's doing what I want. In other languages like Python, maybe, but in complex languages like C++ or Rust it's not as good because of their complexity. I haven't tried, but in Rust you can't write a bubble sort loop without swap(i, j), and GPT could try the usual approach of array[i] = array[j], which won't work at all.
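
The swap detail in the anecdote above is worth seeing concretely: idiomatic Rust swaps slice elements with `slice::swap` rather than juggling indexed assignments, which sidesteps any question of overlapping mutable borrows. A minimal sketch (the function name and sample data are illustrative, not from the original post):

```rust
// Bubble sort on a mutable slice, using the idiomatic `slice::swap`
// instead of a manual temp-variable swap of indexed elements.
fn bubble_sort(arr: &mut [i32]) {
    let n = arr.len();
    for i in 0..n {
        // After each outer pass, the last `i` elements are in place.
        for j in 0..n.saturating_sub(1 + i) {
            if arr[j] > arr[j + 1] {
                // `swap(j, j + 1)` exchanges the two elements without
                // requiring two simultaneous mutable borrows of `arr`.
                arr.swap(j, j + 1);
            }
        }
    }
}

fn main() {
    let mut v = [5, 2, 4, 1, 3];
    bubble_sort(&mut v);
    println!("{:?}", v); // [1, 2, 3, 4, 5]
}
```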
[u/AbleEstablishment155](https://www.reddit.com/r/rust/comments/16iz3fj/is_there_a_specific_llm_for_rust_coding/?utm_source=share&utm_medium=web2x&context=3)
> I searched the Hugging Face Hub for some LLM to help with Rust coding, but most of them are just for Python. Does anyone know of an LLM just for Rust, or how to build one? Thanks

# Scala
Scala is the #28 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Scala is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Scala is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ Scala is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Scala is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Scala makes up 14.87 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ Scala makes up 3.87 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ Scala makes up 4.1 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Scala is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ Scala makes up 1.8 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Scala has 111,969 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Scala projects have had 605,988 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Scala projects have had 271,184 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Scala projects have had 1,508,526 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Scala projects have had 540,327 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/markehammons](https://www.reddit.com/r/scala/comments/124ocqh/scala_and_chatgpt/)
> Today I decided to test it by asking how one would use Scala 3 macros to get the types of the inputs and outputs of a method. It had some decent suggestions to do that for someone that is new to macros, but a lot of its answer was false, suggesting people use something called QuotesContext, not recognizing properly what extension methods are available for the Symbol type, and worst of all, trying to splice Type values into an Expr. If they can manage to get chatgpt to actually tell the truth consistently (like saying "I don't know how to do that" rather than just lying) I think it will be a nice resource for discovering how to do stuff you don't currently know how to do. Sadly, it's still got a nasty habit of making stuff up.
[u/agilesteel](https://www.reddit.com/r/scala/comments/ovoc8n/github_copilot_for_scala_does_it_work/)
> Well... this is a very old thread, but I'm using the latest Copilot for Scala available as of this post. I mostly use the ZIO framework. I was skeptical at first but I'm finding the suggestions get smart quickly and it is generating a lot of code fragments pretty well. I'm not claiming I can't live without it, but as of today, I'm thinking it works pretty well for my scenarios. I could easily see not wanting to code without it in the near future. I think using a framework like ZIO makes it easier to generate code fragments because the ZIO framework has a fairly predictable surface area, but that's just a guess.
[u/k1v1uq](https://www.reddit.com/r/ChatGPTCoding/comments/zpunkt/comment/j25ftsr/?utm_source=share&utm_medium=web2x&context=3)
> I wanted to start a new Scala project based on Clean Architecture aka dependency inversion. So I asked for a basic example to demo the principles. There was a lot of pretty code but ultimately it had no idea what this was about. The code was bs.

# Solidity
Solidity is the #35 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ Solidity is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Solidity is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ Solidity is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Solidity is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
❌ Solidity is not included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Solidity is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Solidity is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Solidity is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Solidity is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Solidity has 6,669 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Solidity projects have had 0 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Solidity projects have had 0 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Solidity projects have had 0 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Solidity projects have had 350 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/Adrewmc](https://www.reddit.com/r/solidity/comments/142amjb/comment/jn48x8v/)
> ChatGPT is awful at smart contracts; the data is years out of date, and it tends to override and make functions that are unnecessary. Even worse, it overrides safe, good functions with unsafe, inefficient functions. Speaking of inefficiency, it will seriously de-optimize optimized code, even when asked to gas-optimize it.
[Lorenzo Sicilia](https://outlierventures.io/article/can-chatgpt-really-be-trusted-to-write-a-smart-contract-or-to-refactor-your-existing-solidity-code/)
> Despite the mixed results, ChatGPT, aka GPT-3.5, is a step forward in the direction of writing code with an AI assistant. I actually enjoyed doing these little experiments. However, compared to other experiments I did with JavaScript and other languages, a clear takeaway from my efforts is that when it comes to the Web3 space, GPT doesn't yet have enough accuracy. In fairness, there is far less available Solidity and Web3-related JavaScript code in the wild than there is general-purpose JavaScript code. Plus, the Web3 industry is constantly changing, which makes the problems of ChatGPT relying on an old dataset much worse. On the positive side, generating an ABI from Solidity is something it did well, which shows it can learn from the available snippets the general rules to create something new.
[u/thatdudeiknew](https://www.reddit.com/r/LocalLLaMA/comments/14qednx/comment/jqmq2t5/?utm_source=share&utm_medium=web2x&context=3)
> Can someone please make an open coder model trained on Solidity

# SQL
SQL is the #5 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ SQL is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ SQL is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ SQL is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ SQL is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ SQL makes up 18.15 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ SQL makes up 5.67 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ SQL is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ SQL is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ SQL is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
SQL has 667,216 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
SQL projects have had 123 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
SQL projects have had 0 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
SQL projects have had 1170 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
SQL projects have had 0 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/paymesucka](https://www.reddit.com/r/SQL/comments/14e04k3/comment/josxeg3/?utm_source=share&utm_medium=web2x&context=3)
> I've used ChatGPT Plus, basically the paid version using GPT-4, and while it has helped suggest some new ways of querying stuff that I hadn't considered, it also just completely made things up. Even when I asked to clarify, like "are you sure that function actually exists?" it would apologize and then say the exact same wrong thing lol. There's no real bullshit filter for these LLMs.
[u/Festernd](https://www.reddit.com/r/SQL/comments/127zawr/comment/jeia6hv/?utm_source=share&utm_medium=web2x&context=3)
> I'm a DBA, 15 years. ChatGPT and other AIs are great up to about the skill level of an intern you'd hire as a jr. After that level of task... it takes more time and effort to vet its output than it saves. I don't think it's a good tool for those learning, as they won't ever develop the skill to spot when and where the AI is wrong. I think there will be a wall of skill that will be impossible to climb for those who use it rather than working through problems on their own first. If you have the discipline to work the problem yourself and only use it if really stuck or to try an alternative, then it can be a nice assistant, like a personal intern that occasionally lies and tries to set you up for failure.
[u/feigndeaf](https://www.reddit.com/r/SQL/comments/12oo0lm/comment/jgj204k/?utm_source=share&utm_medium=web2x&context=3)
> Mostly to debug, but I change the table names for privacy reasons. Once in a while I'll ask it to write code from my plain English when I'm trying to solve a problem. I'll give it my broken code or some context first.

# Swift
Swift is the #21 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ Swift is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ Swift is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ Swift is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ Swift is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ Swift is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ Swift is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ Swift is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ Swift is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ Swift is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
Swift has 331,145 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
Swift projects have had 425,921 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
Swift projects have had 325,962 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
Swift projects have had 1,334,455 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
Swift projects have had 2,731,776 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/[deleted]](https://www.reddit.com/r/swift/comments/174vuyo/comment/k4eayl9/?utm_source=share&utm_medium=web2x&context=3)
> Just a general tip: even though it's a bit out of date, chatgpt will answer these questions much faster and sometimes more accurately than Reddit can. I've pretty much replaced Google with chatgpt and my productivity is up and stress is down. For questions about the newest SwiftUI stuff try Google Bard. The LLMs aren't perfect. There's still a place for Reddit and stack overflow, but I'd check with an LLM first.
[u/Zagerer](https://www.reddit.com/r/swift/comments/13929qe/comment/jj0pti9/)
> I've tried Copilot with SwiftUI and it's good for auto-generating some things like specific styles, but not so good for other parts. Sometimes it helps with unit tests, but other times it gets stuck in a loop.
[u/AppleHitMyHead](https://www.reddit.com/r/swift/comments/1724gke/comment/k481769/?utm_source=share&utm_medium=web2x&context=3)
> Here is my journey coming from C++: Read through "A Swift Tour" and follow along in a Swift Playground. Many times, I feel, "Huh, this part is so much better than C++," or "This is pretty much the same." I don't force myself to learn everything though; for example, I skipped protocols entirely. This process took me a few hours. As I dug into SwiftUI, I ran into syntax I didn't understand. Instead of looking up the official document, I just Google or ChatGPT it. When I start doing things in a C++ way that I always hate, I often pause and search whether Swift does it better. Oftentimes, Swift does do it better! Still, I carry some baggage from C++ and later notice that if I had done it differently, I would have saved myself a lot of trouble (for example, really thinking about whether things can be null or not). Don't be afraid of re-writing; it is part of the process. Today, I am still learning; however, I started to catch myself speaking in a C++ "accent" using Swift, and oftentimes, I can Google/ChatGPT my way out of it.

# TypeScript
TypeScript is the #6 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
✅ TypeScript is one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
✅ TypeScript is one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
✅ TypeScript is one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ TypeScript is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ TypeScript makes up 131.46 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ TypeScript makes up 24.59 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
✅ TypeScript makes up 24.9 GB of the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ TypeScript is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
✅ TypeScript makes up 9.2 GB of the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
TypeScript has 224,865 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
TypeScript projects have had 2,043,216 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
TypeScript projects have had 1,455,167 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
TypeScript projects have had 4,224,408 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
TypeScript projects have had 2,941,085 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/delventhalz](https://www.reddit.com/r/typescript/comments/17i01kj/comment/k6tvg8v/?utm_source=share&utm_medium=web2x&context=3)
> ChatGPT is great for common knowledge, but it just bullshits for more esoteric stuff. Case in point, for `& {}`: "This basically 'seals' the type, making it impossible to add new properties to it." This is just pure nonsense as near as I can tell. A big red flag is how vague it is. What does "seals the type" mean? For that matter, what does it mean to "add new properties" to a type? I messed around with it a bit in a TypeScript Playground and I can find no behavior that remotely corresponds to this explanation from ChatGPT.
[u/k3l2m1t](https://www.reddit.com/r/typescript/comments/13h0n0h/comment/jk2yehs/?utm_source=share&utm_medium=web2x&context=3)
> As someone also somewhat new to typescript but very comfortable with javascript I know what you're going through. Something I've found to be super useful is asking chatGPT questions when something doesn't make sense to me. It usually provides a correct type and allows me to move on with what I'm trying to do instead of banging my head against the wall for 20 minutes.
[u/thinkmatt](https://www.reddit.com/r/typescript/comments/pzmlvt/comment/hf4khk4/?utm_source=share&utm_medium=web2x&context=3)
> I don't think copilot supports typescript more than any other language. It often gives me incorrect suggestions when it comes to typescript. Probably the only reason I might end up dropping it, actually.


@ -1,48 +0,0 @@
# VB.NET
VB.NET is the #23 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ VB.NET is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ VB.NET is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ VB.NET is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ VB.NET is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ VB.NET is included in [The Stack dataset](https://arxiv.org/abs/2211.15533)
❌ VB.NET is not included in the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ VB.NET is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ VB.NET is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ VB.NET is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
VB.NET has 335,092 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
VB.NET projects have had 15,653 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
VB.NET projects have had 2,915 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
VB.NET projects have had 35,848 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
VB.NET projects have had 0 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/KenBonny](https://www.reddit.com/r/dotnet/comments/16j8il5/comment/k0qjb6u/?utm_source=share&utm_medium=web2x&context=3)
> What I've seen from gpt and copilot is that it's a good junior and sparring partner, but it's no substitute for a senior. It lacks reasoning and analytical capabilities to be a true senior. For example, it can tell you the difference between mediator and nservicebus (dotnet environment), but it cannot explain which one you should use for the project you are working on.
[u/Ch33kyMnk3y](https://www.reddit.com/r/dotnet/comments/10s8eld/comment/j704bu4/?utm_source=share&utm_medium=web2x&context=3)
> I've been using it for a LOT of utility classes, regex expressions, and things like that. It's nowhere near replacing my job yet but it's saved me countless hours on some rather trivial but tedious tasks. Most recent today was a function that converts a string to camel case, worked perfectly right out of the gate. Yea I probably could have found the same function on Google in 10 min, but I would have had to comb through ads and useless posts on Stack Overflow before I found one I knew would be performant. It's not laziness; the rest of my job is busy enough. I could have spent an hour or two figuring out the logic from scratch, but simply put, this is a far more efficient use of my time.
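For reference, the kind of utility the commenter describes is only a few lines. This is a sketch of such a function (not the commenter's actual code), handling space-, underscore-, and dash-separated input:

```typescript
// Convert a delimited string ("hello world", "hello_world", "hello-world")
// to camelCase: lowercase everything, split on delimiters, then capitalize
// the first letter of every word except the first.
function toCamelCase(input: string): string {
  return input
    .toLowerCase()
    .split(/[\s_-]+/) // split on runs of whitespace, underscores, dashes
    .filter((word) => word.length > 0)
    .map((word, i) =>
      i === 0 ? word : word[0].toUpperCase() + word.slice(1)
    )
    .join("");
}

console.log(toCamelCase("convert this_string-now")); // "convertThisStringNow"
```

Whether an LLM's first attempt at this is "performant" or even correct on edge cases (leading delimiters, empty strings) is exactly the sort of thing worth checking before moving on.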
[u/ModernTenshi04](https://www.reddit.com/r/dotnet/comments/15od4zx/comment/jvr5vur/?utm_source=share&utm_medium=web2x&context=3)
> Yeah, I just use the free version but I'll ask it to do something, it kinda does it, I ask, "Is this part necessary?" It then responds with oh you're right and redoes it but in a way that still has questions, like I wanted it to explain why it did something the way it did and it takes that as I'm saying it's not really needed. Then I ask it to explain the new changes and it reverts things to the way it did them before thinking I spotted an error in how it redid the code. 🤦‍♂️ I still think it's a nice option to springboard learning or get quick explanations of things with examples, but the more I've used it the less I'm convinced it'll be stealing my job anytime soon. What I actually fear more are engineers and/or middle managers who don't know any better trusting everything it suggests who then think this makes engineers less needed or useful.


@ -1,48 +0,0 @@
# VBA
VBA is the #25 most popular language according to the [2023 Stack Overflow Developer Survey](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages).
## Benchmarks
❌ VBA is not one of the 19 languages in the [MultiPL-E benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=couple%20notable%20mentions-,4.%20MultiPL%2DE,-Creator%3A%20Northeastern)
❌ VBA is not one of the 16 languages in the [BabelCode / TP3 benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=amazon%2Dscience/mxeval-,12.%20BabelCode%20/%20TP3,-Creator%3A%20Google)
❌ VBA is not one of the 13 languages in the [MBXP / Multilingual HumanEval benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=11.%20MBXP%20/%20Multilingual%20HumanEval)
❌ VBA is not one of the 5 languages in the [HumanEval-X benchmark](https://blog.continue.dev/an-introduction-to-code-llm-benchmarks-for-software-engineers/#:~:text=Some%20multilingual%C2%A0benchmarks-,10.%20HumanEval%2DX,-Creator%3A%20Tsinghua)
## Datasets
✅ VBA makes up 2.73 GB of [The Stack dataset](https://arxiv.org/abs/2211.15533)
✅ VBA makes up 1.91 GB of the [CodeParrot dataset](https://huggingface.co/datasets/codeparrot/github-code)
❌ VBA is not included in the [AlphaCode dataset](https://arxiv.org/abs/2203.07814)
❌ VBA is not included in the [CodeGen dataset](https://arxiv.org/abs/2203.13474)
❌ VBA is not included in the [PolyCoder dataset](https://arxiv.org/abs/2202.13169)
## Stack Overflow & GitHub presence
VBA has 212,313 [tagged questions on Stack Overflow](https://stackoverflow.com/tags)
VBA projects have had 22,482 [PRs on GitHub since 2014](https://madnight.github.io/githut/#/pull_requests/2023/3)
VBA projects have had 17,439 [issues on GitHub since 2014](https://madnight.github.io/githut/#/issues/2023/3)
VBA projects have had 77,915 [pushes on GitHub since 2014](https://madnight.github.io/githut/#/pushes/2023/3)
VBA projects have had 19,273 [stars on GitHub since 2014](https://madnight.github.io/githut/#/stars/2023/3)
## Anecdotes from developers
[u/imartnm](https://www.reddit.com/r/vba/comments/108zy8k/comment/j3zcukr/?utm_source=share&utm_medium=web2x&context=3)
> It depends on how you use ChatGPT though. I started a VBA project using methods I had used in the past. When that didn't work, I tried the Google approach, and still couldn't do what I wanted. Then, I remembered that ChatGPT does code, and decided to give it a shot. Honestly, what it gave me was riddled with errors, but I went through error by error and forced the AI to come up with corrections. I would copy-paste the code into the prompt and ask it to identify potential errors and explain how they could be fixed. I got a really intimate understanding of the code, the reasons for the errors, and the strategies for correcting them. Even then, the code was flawed and ultimately failed. But I was able to use some of what I picked up throughout the process to build my own foundation for the code that would eventually work and used the AI to help fill in the blanks. I got a lot out of the experience. It's very important to ask very specific questions and to make sure that you understand the recommendations that it makes so you don't get lost in later steps. I used Google to supplement some of the information the AI gave me to improve my understanding. I spent a lot of time with this thing, and I think we both came out of it just a little better at what we do.
[u/Confuciusz](https://www.reddit.com/r/vba/comments/108zy8k/comment/j3wn54u/?utm_source=share&utm_medium=web2x&context=3)
> I've tried using it for VBA/Power Query code, but it's spotty at the best of times. It sometimes will reference functions that don't exist, or will ignore the carefully worded instructions you give it. At its current state it's most useful as a glorified Google/Stack Overflow search. It can also be helpful while debugging or just to throw some suggestions your way. Writing out the basic structure of my module and asking for recommendations/alternatives to certain implementations is fun and has taught me some new tricks. So it's cool, but not really reliable. Don't let it write your code for you or you might risk spending more time fixing it than you would have just writing it. I'd say its VBA capabilities are better than its grasp on Power Query (M).
[u/E_Man91](https://www.reddit.com/r/vba/comments/123zuo6/comment/je3ixwy/?utm_source=share&utm_medium=web2x&context=3)
> Lol I just made a comment on another similar post where OP said GPT was incredible for Excel 😂 But yeah, GPT is still awful for VBA or long formulas. I tried giving clear instructions for simple tasks that it couldn't get right. It's cool, but it has a long way to go


@ -1,17 +0,0 @@
---
title: Quickstart
description: Getting started with Continue
keywords: [quickstart, start, install, vscode, jetbrains]
---
# ⚡️ Quickstart
1. Click `Install` on the **[Continue extension in the Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=Continue.continue)**
2. This will open the Continue extension page in VS Code, where you will need to click `Install` again
3. Once you do this, you will see the Continue logo show up on the left sidebar. If you click it, the Continue extension will open up:
![vscode-install](/img/continue-screenshot.png)
4. If you have any problems, see the [troubleshooting guide](./troubleshooting.md) or ask for help in [our Discord](https://discord.gg/NWtdYexhMs).


@ -1,21 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# DiffContextProvider
Type '@diff' to reference all of the changes you've made to your current branch. This is useful if you want to summarize what you've done or ask for a general review of your work before committing.
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/plugins/context_providers/diff.py)
## Properties
<ClassPropertyRef name='workspace_dir' details='{&quot;title&quot;: &quot;Workspace Dir&quot;, &quot;description&quot;: &quot;The workspace directory in which to run `git diff`&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;default&quot;: &quot;diff&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="diff"/>
<ClassPropertyRef name='ide' details='{&quot;title&quot;: &quot;Ide&quot;}' required={false} default=""/>
<ClassPropertyRef name='display_title' details='{&quot;title&quot;: &quot;Display Title&quot;, &quot;default&quot;: &quot;Diff&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Diff"/>
<ClassPropertyRef name='description' details='{&quot;title&quot;: &quot;Description&quot;, &quot;default&quot;: &quot;Output of &#x27;git diff&#x27; in current repo&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Output of &#x27;git diff&#x27; in current repo"/>
<ClassPropertyRef name='dynamic' details='{&quot;title&quot;: &quot;Dynamic&quot;, &quot;default&quot;: true, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="True"/>
<ClassPropertyRef name='requires_query' details='{&quot;title&quot;: &quot;Requires Query&quot;, &quot;description&quot;: &quot;Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type &#x27;@search &lt;STRING_TO_SEARCH&gt;&#x27;. This will change the behavior of the UI so that it can indicate the expectation for a query.&quot;, &quot;default&quot;: false, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="False"/>


@ -1,20 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# FileContextProvider
The FileContextProvider is a ContextProvider that allows you to search files in the open workspace.
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/plugins/context_providers/file.py)
## Properties
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;default&quot;: &quot;file&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="file"/>
<ClassPropertyRef name='ide' details='{&quot;title&quot;: &quot;Ide&quot;}' required={false} default=""/>
<ClassPropertyRef name='display_title' details='{&quot;title&quot;: &quot;Display Title&quot;, &quot;default&quot;: &quot;Files&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Files"/>
<ClassPropertyRef name='description' details='{&quot;title&quot;: &quot;Description&quot;, &quot;default&quot;: &quot;Reference files in the current workspace&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Reference files in the current workspace"/>
<ClassPropertyRef name='dynamic' details='{&quot;title&quot;: &quot;Dynamic&quot;, &quot;default&quot;: false, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="False"/>
<ClassPropertyRef name='requires_query' details='{&quot;title&quot;: &quot;Requires Query&quot;, &quot;description&quot;: &quot;Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type &#x27;@search &lt;STRING_TO_SEARCH&gt;&#x27;. This will change the behavior of the UI so that it can indicate the expectation for a query.&quot;, &quot;default&quot;: false, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="False"/>


@ -1,21 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# FileTreeContextProvider
Type '@tree' to reference the contents of your current workspace. The LLM will be able to see the nested directory structure of your project.
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/plugins/context_providers/filetree.py)
## Properties
<ClassPropertyRef name='workspace_dir' details='{&quot;title&quot;: &quot;Workspace Dir&quot;, &quot;description&quot;: &quot;The workspace directory to display&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;default&quot;: &quot;tree&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="tree"/>
<ClassPropertyRef name='ide' details='{&quot;title&quot;: &quot;Ide&quot;}' required={false} default=""/>
<ClassPropertyRef name='display_title' details='{&quot;title&quot;: &quot;Display Title&quot;, &quot;default&quot;: &quot;File Tree&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="File Tree"/>
<ClassPropertyRef name='description' details='{&quot;title&quot;: &quot;Description&quot;, &quot;default&quot;: &quot;Add a formatted file tree of this directory to the context&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Add a formatted file tree of this directory to the context"/>
<ClassPropertyRef name='dynamic' details='{&quot;title&quot;: &quot;Dynamic&quot;, &quot;default&quot;: true, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="True"/>
<ClassPropertyRef name='requires_query' details='{&quot;title&quot;: &quot;Requires Query&quot;, &quot;description&quot;: &quot;Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type &#x27;@search &lt;STRING_TO_SEARCH&gt;&#x27;. This will change the behavior of the UI so that it can indicate the expectation for a query.&quot;, &quot;default&quot;: false, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="False"/>


@ -1,22 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# GitHubIssuesContextProvider
The GitHubIssuesContextProvider is a ContextProvider that allows you to search GitHub issues in a repo. Type '@issue' to reference the title and contents of an issue.
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/plugins/context_providers/github.py)
## Properties
<ClassPropertyRef name='repo_name' details='{&quot;title&quot;: &quot;Repo Name&quot;, &quot;description&quot;: &quot;The name of the GitHub repo from which to pull issues&quot;, &quot;type&quot;: &quot;string&quot;}' required={true} default=""/>
<ClassPropertyRef name='auth_token' details='{&quot;title&quot;: &quot;Auth Token&quot;, &quot;description&quot;: &quot;The GitHub auth token to use to authenticate with the GitHub API&quot;, &quot;type&quot;: &quot;string&quot;}' required={true} default=""/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;default&quot;: &quot;issues&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="issues"/>
<ClassPropertyRef name='ide' details='{&quot;title&quot;: &quot;Ide&quot;}' required={false} default=""/>
<ClassPropertyRef name='display_title' details='{&quot;title&quot;: &quot;Display Title&quot;, &quot;default&quot;: &quot;GitHub Issues&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="GitHub Issues"/>
<ClassPropertyRef name='description' details='{&quot;title&quot;: &quot;Description&quot;, &quot;default&quot;: &quot;Reference GitHub issues&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Reference GitHub issues"/>
<ClassPropertyRef name='dynamic' details='{&quot;title&quot;: &quot;Dynamic&quot;, &quot;default&quot;: false, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="False"/>
<ClassPropertyRef name='requires_query' details='{&quot;title&quot;: &quot;Requires Query&quot;, &quot;description&quot;: &quot;Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type &#x27;@search &lt;STRING_TO_SEARCH&gt;&#x27;. This will change the behavior of the UI so that it can indicate the expectation for a query.&quot;, &quot;default&quot;: false, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="False"/>


@ -1,21 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# GoogleContextProvider
Type '@google' to reference the results of a Google search. For example, type "@google python tutorial" if you want to search and discuss ways of learning Python.
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/plugins/context_providers/google.py)
## Properties
<ClassPropertyRef name='serper_api_key' details='{&quot;title&quot;: &quot;Serper Api Key&quot;, &quot;description&quot;: &quot;Your SerpAPI key, used to programmatically make Google searches. You can get a key at https://serper.dev.&quot;, &quot;type&quot;: &quot;string&quot;}' required={true} default=""/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;default&quot;: &quot;google&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="google"/>
<ClassPropertyRef name='ide' details='{&quot;title&quot;: &quot;Ide&quot;}' required={false} default=""/>
<ClassPropertyRef name='display_title' details='{&quot;title&quot;: &quot;Display Title&quot;, &quot;default&quot;: &quot;Google&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Google"/>
<ClassPropertyRef name='description' details='{&quot;title&quot;: &quot;Description&quot;, &quot;default&quot;: &quot;Search Google&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Search Google"/>
<ClassPropertyRef name='dynamic' details='{&quot;title&quot;: &quot;Dynamic&quot;, &quot;default&quot;: true, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="True"/>
<ClassPropertyRef name='requires_query' details='{&quot;title&quot;: &quot;Requires Query&quot;, &quot;default&quot;: true, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="True"/>


@ -1,21 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# SearchContextProvider
Type '@search' to reference the results of codebase search, just like the results you would get from VS Code search.
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/plugins/context_providers/search.py)
## Properties
<ClassPropertyRef name='workspace_dir' details='{&quot;title&quot;: &quot;Workspace Dir&quot;, &quot;description&quot;: &quot;The workspace directory to search&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;default&quot;: &quot;search&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="search"/>
<ClassPropertyRef name='ide' details='{&quot;title&quot;: &quot;Ide&quot;}' required={false} default=""/>
<ClassPropertyRef name='display_title' details='{&quot;title&quot;: &quot;Display Title&quot;, &quot;default&quot;: &quot;Search&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Search"/>
<ClassPropertyRef name='description' details='{&quot;title&quot;: &quot;Description&quot;, &quot;default&quot;: &quot;Search workspace for exact matches&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Search workspace for exact matches"/>
<ClassPropertyRef name='dynamic' details='{&quot;title&quot;: &quot;Dynamic&quot;, &quot;default&quot;: true, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="True"/>
<ClassPropertyRef name='requires_query' details='{&quot;title&quot;: &quot;Requires Query&quot;, &quot;default&quot;: true, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="True"/>


@ -1,21 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# TerminalContextProvider
Type '@terminal' to reference the contents of your IDE's terminal.
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/plugins/context_providers/terminal.py)
## Properties
<ClassPropertyRef name='get_last_n_commands' details='{&quot;title&quot;: &quot;Get Last N Commands&quot;, &quot;description&quot;: &quot;The number of previous commands to reference&quot;, &quot;default&quot;: 3, &quot;type&quot;: &quot;integer&quot;}' required={false} default="3"/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;default&quot;: &quot;terminal&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="terminal"/>
<ClassPropertyRef name='ide' details='{&quot;title&quot;: &quot;Ide&quot;}' required={false} default=""/>
<ClassPropertyRef name='display_title' details='{&quot;title&quot;: &quot;Display Title&quot;, &quot;default&quot;: &quot;Terminal&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Terminal"/>
<ClassPropertyRef name='description' details='{&quot;title&quot;: &quot;Description&quot;, &quot;default&quot;: &quot;Reference the contents of the terminal&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Reference the contents of the terminal"/>
<ClassPropertyRef name='dynamic' details='{&quot;title&quot;: &quot;Dynamic&quot;, &quot;default&quot;: true, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="True"/>
<ClassPropertyRef name='requires_query' details='{&quot;title&quot;: &quot;Requires Query&quot;, &quot;description&quot;: &quot;Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type &#x27;@search &lt;STRING_TO_SEARCH&gt;&#x27;. This will change the behavior of the UI so that it can indicate the expectation for a query.&quot;, &quot;default&quot;: false, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="False"/>


@ -1,22 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# URLContextProvider
Type '@url' to reference the contents of a URL. You can either reference preset URLs, or reference one dynamically by typing '@url https://example.com'. The text contents of the page will be fetched and used as context.
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/plugins/context_providers/url.py)
## Properties
<ClassPropertyRef name='preset_urls' details='{&quot;title&quot;: &quot;Preset Urls&quot;, &quot;description&quot;: &quot;A list of preset URLs that you will be able to quickly reference by typing &#x27;@url&#x27;&quot;, &quot;default&quot;: [], &quot;type&quot;: &quot;array&quot;, &quot;items&quot;: {&quot;type&quot;: &quot;string&quot;}}' required={false} default="[]"/>
<ClassPropertyRef name='static_url_context_items' details='{&quot;title&quot;: &quot;Static Url Context Items&quot;, &quot;default&quot;: [], &quot;type&quot;: &quot;array&quot;, &quot;items&quot;: {&quot;$ref&quot;: &quot;#/definitions/ContextItem&quot;}}' required={false} default="[]"/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;default&quot;: &quot;url&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="url"/>
<ClassPropertyRef name='ide' details='{&quot;title&quot;: &quot;Ide&quot;}' required={false} default=""/>
<ClassPropertyRef name='display_title' details='{&quot;title&quot;: &quot;Display Title&quot;, &quot;default&quot;: &quot;URL&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="URL"/>
<ClassPropertyRef name='description' details='{&quot;title&quot;: &quot;Description&quot;, &quot;default&quot;: &quot;Reference the contents of a webpage&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Reference the contents of a webpage"/>
<ClassPropertyRef name='dynamic' details='{&quot;title&quot;: &quot;Dynamic&quot;, &quot;default&quot;: true, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="True"/>
<ClassPropertyRef name='requires_query' details='{&quot;title&quot;: &quot;Requires Query&quot;, &quot;default&quot;: true, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="True"/>


@ -1,37 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# AnthropicLLM
To set up Anthropic, add the following to your `config.json` file:
```json title="~/.continue/config.json"
{
"models": [{
"title": "Anthropic",
"provider": "anthropic",
"model": "claude-2",
"api_key": "YOUR_API_KEY"
}]
}
```
Claude 2 is not yet publicly released. You can request early access [here](https://www.anthropic.com/earlyaccess).
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/anthropic.py)
## Properties
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;default&quot;: &quot;claude-2&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="claude-2"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;The base URL of the LLM API.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>


@ -1,40 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# GGML
See our [5 minute quickstart](https://github.com/continuedev/ggml-server-example) to run any model locally with ggml. While these models don't yet perform as well, they are free, entirely private, and run offline.
Once the model is running on localhost:8000, change `~/.continue/config.json` to look like this:
```json title="~/.continue/config.json"
{
"models": [{
"title": "GGML",
"provider": "openai-aiohttp",
"model": "MODEL_NAME",
"api_base": "http://localhost:8000"
}]
}
```
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/ggml.py)
## Properties
<ClassPropertyRef name='api_type' details='{&quot;title&quot;: &quot;Api Type&quot;, &quot;description&quot;: &quot;OpenAI API type.&quot;, &quot;enum&quot;: [&quot;azure&quot;, &quot;openai&quot;], &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='api_version' details='{&quot;title&quot;: &quot;Api Version&quot;, &quot;description&quot;: &quot;OpenAI API version. For use with Azure OpenAI Service.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='engine' details='{&quot;title&quot;: &quot;Engine&quot;, &quot;description&quot;: &quot;OpenAI engine. For use with Azure OpenAI Service.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to use (optional for the GGML class)&quot;, &quot;default&quot;: &quot;ggml&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="ggml"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;The API key for the LLM provider.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;URL of the OpenAI-compatible server where the model is being served&quot;, &quot;default&quot;: &quot;http://localhost:8000&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="http://localhost:8000"/>


@ -1,35 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# GooglePaLMAPI
The Google PaLM API is currently in public preview, so production applications are not supported yet. However, you can [create an API key in Google MakerSuite](https://makersuite.google.com/u/2/app/apikey) and begin trying out the `chat-bison-001` model. Change `~/.continue/config.json` to look like this:
```json title="~/.continue/config.json"
{
"models": [{
"title": "Chat Bison",
"provider": "google-palm",
"model": "chat-bison-001",
"api_key": "YOUR_API_KEY"
}]
}
```
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/google_palm_api.py)
## Properties
### Inherited Properties
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;Google PaLM API key&quot;, &quot;type&quot;: &quot;string&quot;}' required={true} default=""/>
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;default&quot;: &quot;chat-bison-001&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="chat-bison-001"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;The base URL of the LLM API.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
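Like the other inherited properties, `completion_options` can be set directly in the same model entry. A sketch with illustrative values (the specific numbers are assumptions, not recommendations from the source):

```json title="~/.continue/config.json"
{
  "models": [{
    "title": "Chat Bison",
    "provider": "google-palm",
    "model": "chat-bison-001",
    "api_key": "YOUR_API_KEY",
    "completion_options": {
      "temperature": 0.5,
      "max_tokens": 1024
    }
  }]
}
```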


@ -1,36 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# HuggingFaceInferenceAPI
Hugging Face Inference API is a great option for newly released language models. Sign up for an account and add billing [here](https://huggingface.co/settings/billing). Then open the [Inference Endpoints](https://ui.endpoints.huggingface.co) page, click “New endpoint”, fill out the form (e.g. select a model like [WizardCoder-Python-34B-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0)), and deploy your model by clicking “Create Endpoint”. Change `~/.continue/config.json` to look like this:
```json title="~/.continue/config.json"
{
"models": [{
"title": "Hugging Face Inference API",
"provider": "huggingface-inference-api",
"model": "MODEL_NAME",
"api_key": "YOUR_HF_TOKEN",
"api_base": "INFERENCE_API_ENDPOINT_URL"
}]
}
```
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/hf_inference_api.py)
## Properties
### Inherited Properties
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;Your Hugging Face API token&quot;, &quot;type&quot;: &quot;string&quot;}' required={true} default=""/>
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to use (optional for the HuggingFaceInferenceAPI class)&quot;, &quot;default&quot;: &quot;Hugging Face Inference API&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="Hugging Face Inference API"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;Your Hugging Face Inference API endpoint URL&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>


@ -1,24 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# HuggingFaceTGI
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/hf_tgi.py)
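This class has no quickstart above; a minimal config sketch, assuming a [Text Generation Inference](https://github.com/huggingface/text-generation-inference) server is already running at the default `http://localhost:8080` (`MODEL_NAME` is a placeholder, and the provider string is assumed to mirror the class's default model name):

```json title="~/.continue/config.json"
{
  "models": [{
    "title": "Hugging Face TGI",
    "provider": "huggingface-tgi",
    "model": "MODEL_NAME",
    "api_base": "http://localhost:8080"
  }]
}
```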
## Properties
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;default&quot;: &quot;huggingface-tgi&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="huggingface-tgi"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;The API key for the LLM provider.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;URL of your TGI server&quot;, &quot;default&quot;: &quot;http://localhost:8080&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="http://localhost:8080"/>


@ -1,42 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# LlamaCpp
Run the llama.cpp server binary to start the API server. If running on a remote server, be sure to set `--host` to `0.0.0.0`:
```shell
.\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models\meta\llama\codellama-7b-instruct.Q8_0.gguf
```
After it's up and running, change `~/.continue/config.json` to look like this:
```json title="~/.continue/config.json"
{
"models": [{
"title": "Llama CPP",
"provider": "llama.cpp",
"model": "MODEL_NAME",
"api_base": "http://localhost:8080"
}]
}
```
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/llamacpp.py)
## Properties
<ClassPropertyRef name='llama_cpp_args' details='{&quot;title&quot;: &quot;Llama Cpp Args&quot;, &quot;description&quot;: &quot;A list of additional arguments to pass to llama.cpp. See [here](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#api-endpoints) for the complete catalog of options.&quot;, &quot;default&quot;: {&quot;stop&quot;: [&quot;[INST]&quot;]}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{&#x27;stop&#x27;: [&#x27;[INST]&#x27;]}"/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;default&quot;: &quot;llamacpp&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="llamacpp"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;The API key for the LLM provider.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;URL of the server&quot;, &quot;default&quot;: &quot;http://localhost:8080&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="http://localhost:8080"/>
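The `prompt_templates` property described above can be used to override the prompt for the /edit slash command. A sketch in mustache syntax, using the [INST] format expected by CodeLlama-Instruct (the template variable names here are assumptions for illustration, not taken from the source):

```json title="~/.continue/config.json"
{
  "models": [{
    "title": "Llama CPP",
    "provider": "llama.cpp",
    "model": "MODEL_NAME",
    "api_base": "http://localhost:8080",
    "prompt_templates": {
      "edit": "[INST] Rewrite the following code to satisfy the request.\nRequest: {{user_input}}\nCode:\n{{code_to_edit}} [/INST]"
    }
  }]
}
```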


@ -1,34 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# Ollama
[Ollama](https://ollama.ai/) is an application for Mac and Linux that makes it easy to locally run open-source models, including Llama-2. Download the app from the website, and it will walk you through setup in a couple of minutes. You can also read more in their [README](https://github.com/jmorganca/ollama). Continue can then be configured to use the `Ollama` LLM class:
```json title="~/.continue/config.json"
{
"models": [{
"title": "Ollama",
"provider": "ollama",
"model": "llama2"
}]
}
```
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/ollama.py)
## Properties
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;default&quot;: &quot;llama2&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="llama2"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;The API key for the LLM provider.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;URL of the Ollama server&quot;, &quot;default&quot;: &quot;http://localhost:11434&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="http://localhost:11434"/>


@ -1,49 +0,0 @@
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# OpenAI
The OpenAI class can be used to access OpenAI models like `gpt-4` and `gpt-3.5-turbo`.
If you are locally serving a model that uses an OpenAI-compatible server, you can simply change the `api_base` like this:
```json title="~/.continue/config.json"
{
"models": [{
"title": "OpenAI-compatible server",
"provider": "openai",
"model": "MODEL_NAME",
"api_key": "EMPTY",
"api_base": "http://localhost:8000"
}]
}
```
Options for serving models locally with an OpenAI-compatible server include:
- [text-gen-webui](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai#setup--installation)
- [FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md)
- [LocalAI](https://localai.io/basics/getting_started/)
- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python#web-server)
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/openai.py)
## Properties
<ClassPropertyRef name='api_type' details='{&quot;title&quot;: &quot;Api Type&quot;, &quot;description&quot;: &quot;OpenAI API type.&quot;, &quot;enum&quot;: [&quot;azure&quot;, &quot;openai&quot;], &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='api_version' details='{&quot;title&quot;: &quot;Api Version&quot;, &quot;description&quot;: &quot;OpenAI API version. For use with Azure OpenAI Service.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='engine' details='{&quot;title&quot;: &quot;Engine&quot;, &quot;description&quot;: &quot;OpenAI engine. For use with Azure OpenAI Service.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='use_legacy_completions_endpoint' details='{&quot;title&quot;: &quot;Use Legacy Completions Endpoint&quot;, &quot;description&quot;: &quot;Manually specify to use the legacy completions endpoint instead of chat completions.&quot;, &quot;default&quot;: false, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="False"/>
### Inherited Properties
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;type&quot;: &quot;string&quot;}' required={true} default=""/>
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;OpenAI API key&quot;, &quot;type&quot;: &quot;string&quot;}' required={true} default=""/>
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;OpenAI API base URL.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
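As a sketch of how the Azure-specific properties fit together, a configuration might look like the following (the deployment name, resource URL, and API key are placeholders for your own values):

```json title="~/.continue/config.json"
{
  "models": [{
    "title": "Azure GPT-4",
    "provider": "openai",
    "model": "gpt-4",
    "engine": "YOUR_DEPLOYMENT_NAME",
    "api_base": "https://YOUR_RESOURCE.openai.azure.com",
    "api_key": "YOUR_API_KEY"
  }]
}
```

Here `engine` names the Azure deployment, while `api_base` points at your Azure OpenAI resource instead of the default OpenAI endpoint.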
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# OpenAIFreeTrial
With the `OpenAIFreeTrial` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code.
Once you are using Continue regularly, though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps:
1. Copy your API key from https://platform.openai.com/account/api-keys
2. Open `~/.continue/config.json`. You can do this by using the `/config` command in Continue
3. Change the default LLMs to look like this:
```json title="~/.continue/config.json"
{
"models": [
{
"title": "GPT-4",
"provider": "openai",
"model": "gpt-4",
"api_key": "YOUR_API_KEY"
},
{
"title": "GPT-3.5-Turbo",
"provider": "openai",
"model": "gpt-3.5-turbo",
"api_key": "YOUR_API_KEY"
}
],
"model_roles": {
"default": "GPT-4",
"summarize": "GPT-3.5-Turbo"
}
}
```
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/openai_free_trial.py)
## Properties
<ClassPropertyRef name='llm' details='{&quot;$ref&quot;: &quot;#/definitions/LLM&quot;}' required={false} default=""/>
### Inherited Properties
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;type&quot;: &quot;string&quot;}' required={true} default=""/>
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;The API key for the LLM provider.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;The base URL of the LLM API.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# QueuedLLM
`QueuedLLM` compensates for LLM servers that cannot handle multiple concurrent requests. It uses a lock to ensure that only one request is processed at a time.
If you are already using another LLM class and are experiencing this problem, you can wrap it with the `QueuedLLM` class like this:
```python title="~/.continue/config.py"
# ContinueConfig and Models are assumed to be imported already,
# as in the default config.py template.
from continuedev.libs.llm.queued import QueuedLLM

config = ContinueConfig(
    ...
    models=Models(
        # Replace <OTHER_LLM_CLASS> with the LLM instance you are already using
        default=QueuedLLM(llm=<OTHER_LLM_CLASS>)
    )
)
```
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/queued.py)
## Properties
<ClassPropertyRef name='llm' details='{&quot;title&quot;: &quot;Llm&quot;, &quot;description&quot;: &quot;The LLM to wrap with a lock&quot;, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/LLM&quot;}]}' required={true} default=""/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;default&quot;: &quot;queued&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="queued"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;The API key for the LLM provider.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;The base URL of the LLM API.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# ReplicateLLM
Replicate is a great option for newly released language models or models that you've deployed through their platform. Sign up for an account [here](https://replicate.ai/), copy your API key, and then select any model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models). Change `~/.continue/config.json` to look like this:
```json title="~/.continue/config.json"
{
"models": [{
"title": "Replicate CodeLLama",
"provider": "replicate",
"model": "codellama-13b",
"api_key": "YOUR_API_KEY"
}]
}
```
If you don't specify the `model` parameter, it will default to `replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781`.
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/replicate.py)
## Properties
### Inherited Properties
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;Replicate API key&quot;, &quot;type&quot;: &quot;string&quot;}' required={true} default=""/>
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;default&quot;: &quot;replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;The base URL of the LLM API.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# TextGenWebUI
TextGenWebUI is a comprehensive, open-source language model UI and local server. You can set it up with an OpenAI-compatible server plugin, but if that doesn't work for some reason, you can use this class instead:
```json title="~/.continue/config.json"
{
"models": [{
"title": "Text Generation WebUI",
"provider": "text-gen-webui",
"model": "MODEL_NAME"
}]
}
```
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/text_gen_webui.py)
## Properties
<ClassPropertyRef name='streaming_url' details='{&quot;title&quot;: &quot;Streaming Url&quot;, &quot;description&quot;: &quot;URL of your TextGenWebUI streaming server (separate from main server URL)&quot;, &quot;default&quot;: &quot;http://localhost:5005&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="http://localhost:5005"/>
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;default&quot;: &quot;text-gen-webui&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="text-gen-webui"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;The API key for the LLM provider.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;URL of your TextGenWebUI server&quot;, &quot;default&quot;: &quot;http://localhost:5000&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="http://localhost:5000"/>
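For example, if your TextGenWebUI server runs on non-default ports, both URLs can be set explicitly. This is a sketch that assumes the property names above map directly onto `config.json` keys; the port numbers are illustrative:

```json title="~/.continue/config.json"
{
  "models": [{
    "title": "Text Generation WebUI",
    "provider": "text-gen-webui",
    "model": "MODEL_NAME",
    "api_base": "http://localhost:5000",
    "streaming_url": "http://localhost:5005"
  }]
}
```

Note that `streaming_url` is separate from `api_base` because TextGenWebUI serves its streaming API on a different port than its main server.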
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# TogetherLLM
The Together API is a cloud platform for running large AI models. You can sign up [here](https://api.together.xyz/signup), copy your API key on the initial welcome screen, and then hit the play button on any model from the [Together Models list](https://docs.together.ai/docs/models-inference). Change `~/.continue/config.json` to look like this:
```json title="~/.continue/config.json"
{
"models": [{
"title": "Together CodeLlama",
"provider": "together",
"model": "codellama-13b",
"api_key": "YOUR_API_KEY"
}]
}
```
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/libs/llm/together.py)
## Properties
### Inherited Properties
<ClassPropertyRef name='api_key' details='{&quot;title&quot;: &quot;Api Key&quot;, &quot;description&quot;: &quot;Together API key&quot;, &quot;type&quot;: &quot;string&quot;}' required={true} default=""/>
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='unique_id' details='{&quot;title&quot;: &quot;Unique Id&quot;, &quot;description&quot;: &quot;The unique ID of the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='model' details='{&quot;title&quot;: &quot;Model&quot;, &quot;description&quot;: &quot;The name of the model to be used (e.g. gpt-4, codellama)&quot;, &quot;default&quot;: &quot;togethercomputer/RedPajama-INCITE-7B-Instruct&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="togethercomputer/RedPajama-INCITE-7B-Instruct"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='context_length' details='{&quot;title&quot;: &quot;Context Length&quot;, &quot;description&quot;: &quot;The maximum context length of the LLM in tokens, as counted by count_tokens.&quot;, &quot;default&quot;: 2048, &quot;type&quot;: &quot;integer&quot;}' required={false} default="2048"/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Options for the completion endpoint. Read more about the completion options in the documentation.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='request_options' details='{&quot;title&quot;: &quot;Request Options&quot;, &quot;description&quot;: &quot;Options for the HTTP request to the LLM.&quot;, &quot;default&quot;: {&quot;timeout&quot;: 300, &quot;verify_ssl&quot;: null, &quot;ca_bundle_path&quot;: null, &quot;proxy&quot;: null, &quot;headers&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RequestOptions&quot;}]}' required={false} default="{&#x27;timeout&#x27;: 300, &#x27;verify_ssl&#x27;: None, &#x27;ca_bundle_path&#x27;: None, &#x27;proxy&#x27;: None, &#x27;headers&#x27;: None}"/>
<ClassPropertyRef name='prompt_templates' details='{&quot;title&quot;: &quot;Prompt Templates&quot;, &quot;description&quot;: &quot;A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \&quot;edit\&quot; key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.&quot;, &quot;default&quot;: {}, &quot;type&quot;: &quot;object&quot;}' required={false} default="{}"/>
<ClassPropertyRef name='api_base' details='{&quot;title&quot;: &quot;Api Base&quot;, &quot;description&quot;: &quot;The base URL for your Together API instance&quot;, &quot;default&quot;: &quot;https://api.together.xyz&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="https://api.together.xyz"/>
import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# Configuration Options
[View the source](https://github.com/continuedev/continue/blob/main/server/continuedev/core/config.py)
## Properties
<ClassPropertyRef name='disallowed_steps' details='{&quot;title&quot;: &quot;Disallowed Steps&quot;, &quot;description&quot;: &quot;Steps that are not allowed to be run, and will be skipped if attempted&quot;, &quot;default&quot;: [], &quot;type&quot;: &quot;array&quot;, &quot;items&quot;: {&quot;type&quot;: &quot;string&quot;}}' required={false} default="[]"/>
<ClassPropertyRef name='allow_anonymous_telemetry' details='{&quot;title&quot;: &quot;Allow Anonymous Telemetry&quot;, &quot;description&quot;: &quot;If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to False, we will not collect any data.&quot;, &quot;default&quot;: true, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="True"/>
<ClassPropertyRef name='models' details='{&quot;title&quot;: &quot;Models&quot;, &quot;default&quot;: [{&quot;title&quot;: &quot;GPT-4 (trial)&quot;, &quot;provider&quot;: &quot;openai-free-trial&quot;, &quot;model&quot;: &quot;gpt-4&quot;, &quot;api_key&quot;: &quot;&quot;}], &quot;type&quot;: &quot;array&quot;, &quot;items&quot;: {&quot;$ref&quot;: &quot;#/definitions/ModelDescription&quot;}}' required={false} default="[{&#x27;title&#x27;: &#x27;GPT-4 (trial)&#x27;, &#x27;provider&#x27;: &#x27;openai-free-trial&#x27;, &#x27;model&#x27;: &#x27;gpt-4&#x27;, &#x27;api_key&#x27;: &#x27;&#x27;}]"/>
<ClassPropertyRef name='model_roles' details='{&quot;title&quot;: &quot;Model Roles&quot;, &quot;description&quot;: &quot;Roles for models. Each entry should be the title of a model in the models array.&quot;, &quot;default&quot;: {&quot;default&quot;: &quot;GPT-4 (trial)&quot;, &quot;chat&quot;: null, &quot;edit&quot;: null, &quot;summarize&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/ModelRoles&quot;}]}' required={false} default="{&#x27;default&#x27;: &#x27;GPT-4 (trial)&#x27;, &#x27;chat&#x27;: None, &#x27;edit&#x27;: None, &#x27;summarize&#x27;: None}"/>
<ClassPropertyRef name='system_message' details='{&quot;title&quot;: &quot;System Message&quot;, &quot;description&quot;: &quot;A system message that will always be followed by the LLM&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='completion_options' details='{&quot;title&quot;: &quot;Completion Options&quot;, &quot;description&quot;: &quot;Default options for completion. These will be overridden by any options set for a specific model.&quot;, &quot;default&quot;: {&quot;temperature&quot;: null, &quot;top_p&quot;: null, &quot;top_k&quot;: null, &quot;presence_penalty&quot;: null, &quot;frequency_penalty&quot;: null, &quot;stop&quot;: null, &quot;max_tokens&quot;: 600}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/BaseCompletionOptions&quot;}]}' required={false} default="{&#x27;temperature&#x27;: None, &#x27;top_p&#x27;: None, &#x27;top_k&#x27;: None, &#x27;presence_penalty&#x27;: None, &#x27;frequency_penalty&#x27;: None, &#x27;stop&#x27;: None, &#x27;max_tokens&#x27;: 600}"/>
<ClassPropertyRef name='slash_commands' details='{&quot;title&quot;: &quot;Slash Commands&quot;, &quot;description&quot;: &quot;An array of slash commands that let you map custom Steps to a shortcut.&quot;, &quot;default&quot;: [], &quot;type&quot;: &quot;array&quot;, &quot;items&quot;: {&quot;$ref&quot;: &quot;#/definitions/SlashCommand&quot;}}' required={false} default="[]"/>
<ClassPropertyRef name='custom_commands' details='{&quot;title&quot;: &quot;Custom Commands&quot;, &quot;description&quot;: &quot;An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /&lt;name&gt; in the text input, it will act as a shortcut to the prompt.&quot;, &quot;default&quot;: [{&quot;name&quot;: &quot;test&quot;, &quot;prompt&quot;: &quot;Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don&#x27;t edit any file.&quot;, &quot;description&quot;: &quot;This is an example custom command. Use /config to edit it and create more&quot;}], &quot;type&quot;: &quot;array&quot;, &quot;items&quot;: {&quot;$ref&quot;: &quot;#/definitions/CustomCommand&quot;}}' required={false} default="[{&#x27;name&#x27;: &#x27;test&#x27;, &#x27;prompt&#x27;: &quot;Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don&#x27;t edit any file.&quot;, &#x27;description&#x27;: &#x27;This is an example custom command. Use /config to edit it and create more&#x27;}]"/>
<ClassPropertyRef name='context_providers' details='{&quot;title&quot;: &quot;Context Providers&quot;, &quot;description&quot;: &quot;A list of ContextProvider objects that can be used to provide context to the LLM by typing &#x27;@&#x27;. Read more about ContextProviders in the documentation.&quot;, &quot;default&quot;: [], &quot;type&quot;: &quot;array&quot;, &quot;items&quot;: {&quot;$ref&quot;: &quot;#/definitions/ContextProviderWithParams&quot;}}' required={false} default="[]"/>
<ClassPropertyRef name='user_token' details='{&quot;title&quot;: &quot;User Token&quot;, &quot;description&quot;: &quot;An optional token to identify the user.&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
<ClassPropertyRef name='data_server_url' details='{&quot;title&quot;: &quot;Data Server Url&quot;, &quot;description&quot;: &quot;The URL of the server where development data is sent. No data is sent unless you have explicitly set the `user_token` property to a valid token that we have shared.&quot;, &quot;default&quot;: &quot;https://us-west1-autodebug.cloudfunctions.net&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="https://us-west1-autodebug.cloudfunctions.net"/>
<ClassPropertyRef name='disable_summaries' details='{&quot;title&quot;: &quot;Disable Summaries&quot;, &quot;description&quot;: &quot;If set to `True`, Continue will not generate summaries for each Step. This can be useful if you want to save on compute.&quot;, &quot;default&quot;: false, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="False"/>
<ClassPropertyRef name='disable_indexing' details='{&quot;title&quot;: &quot;Disable Indexing&quot;, &quot;description&quot;: &quot;If set to `True`, Continue will not index the codebase. This is mainly used for debugging purposes.&quot;, &quot;default&quot;: false, &quot;type&quot;: &quot;boolean&quot;}' required={false} default="False"/>
<ClassPropertyRef name='retrieval_settings' details='{&quot;title&quot;: &quot;Retrieval Settings&quot;, &quot;description&quot;: &quot;Settings for the retrieval system. Read more about the retrieval system in the documentation.&quot;, &quot;default&quot;: {&quot;n_retrieve&quot;: 50, &quot;n_final&quot;: 10, &quot;use_reranking&quot;: true, &quot;rerank_group_size&quot;: 5, &quot;ignore_files&quot;: [], &quot;openai_api_key&quot;: null, &quot;api_base&quot;: null, &quot;api_type&quot;: null, &quot;api_version&quot;: null, &quot;organization_id&quot;: null}, &quot;allOf&quot;: [{&quot;$ref&quot;: &quot;#/definitions/RetrievalSettings&quot;}]}' required={false} default="{&#x27;n_retrieve&#x27;: 50, &#x27;n_final&#x27;: 10, &#x27;use_reranking&#x27;: True, &#x27;rerank_group_size&#x27;: 5, &#x27;ignore_files&#x27;: [], &#x27;openai_api_key&#x27;: None, &#x27;api_base&#x27;: None, &#x27;api_type&#x27;: None, &#x27;api_version&#x27;: None, &#x27;organization_id&#x27;: None}"/>

View File

@ -1,36 +0,0 @@
---
title: Telemetry
description: Continue collects anonymous usage information
keywords: [telemetry, anonymous, usage info, opt out]
---
# 🦔 Telemetry
## Overview
Continue collects and reports **anonymous** usage information. This data is essential to understanding how we should improve the library. You can opt out of it at any time. We use [PostHog](https://posthog.com/), an open-source platform for product analytics, to collect and store the data. You can review the code [here](https://github.com/continuedev/continue/tree/main/server/continuedev/libs/util/telemetry.py).
## What we track
We track
- the steps that are run and their parameters
- whether you accept or reject suggestions (not the code itself)
- the traceback when an error occurs
- the name of your OS
- the name of the default model you configured
All data is anonymous and cleaned of PII before being sent to PostHog.
## How to opt out
The `~/.continue` directory contains a `config.json` file that looks like this:
```json title="~/.continue/config.json"
{
  "allow_anonymous_telemetry": true,
  ...
}
```
You can turn off anonymous telemetry by changing the value of `allow_anonymous_telemetry` to `false`.
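If you prefer to flip the flag with a script rather than by hand, a minimal sketch (the path and key come from the docs above; the helper name is illustrative):

```python
import json

def disable_telemetry(config_path):
    """Set allow_anonymous_telemetry to false in the given config.json."""
    with open(config_path) as f:
        config = json.load(f)
    config["allow_anonymous_telemetry"] = False
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
```

Point it at `~/.continue/config.json` (after backing the file up) and restart your IDE for the change to take effect.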

View File

@ -1,71 +0,0 @@
---
title: Troubleshooting
description: Troubleshooting while waiting for help during beta / alpha testing
keywords: [reload, delete, manually, logs, server, console]
---
# ❓ Troubleshooting
The Continue VS Code extension is currently in beta, and the IntelliJ extension is in alpha. They will attempt to start the Continue Python server locally for you, but sometimes this will fail, causing the "Starting Continue server..." message not to disappear, or other hangups. While we work on fixes for these problems, there are a few things you can do to troubleshoot in the meantime:
## Reload your editor
#### IntelliJ
Close out the window and re-open the project. This will give Continue another chance to start the server.
#### VS Code
Open the command palette with `cmd+shift+p` (macOS) / `ctrl+shift+p` (Windows), then type "Reload Window" and select it. This will give Continue another chance to start the server.
## Kill the existing server
If the above doesn't work, you can try to kill the server manually before reloading.
1. Open any terminal
2. Enter `lsof -i :65432 | grep "(LISTEN)" | awk '{print $2}' | xargs kill -9` to kill the server running on port 65432.
3. Restart your IDE and Continue will attempt to start a fresh server.
## Delete `~/.continue`
To get a completely fresh install of Continue, you can delete the `~/.continue` directory. Note that this will delete your config file and all saved sessions and development data.
## Run the server manually
If none of these work, you can start the server yourself as is explained here: [Running the Continue server manually](./walkthroughs/manually-run-continue.md).
This may be necessary if you have a firewall blocking the server from downloading, are on an air-gapped computer, or are on an OS where the server binary fails to run (e.g. RHEL8).
## Check the server logs
#### IntelliJ
Open the file `~/.continue/continue.log` where you can view the latest logs at the bottom.
#### VS Code
1. `cmd+shift+p` (macOS) / `ctrl+shift+p` (Windows)
2. Search for and then select "Continue: View Continue Server Logs"
3. Read the `continue.log` file that has opened
## Check the console logs (VS Code)
If your Continue server is not starting up, try checking the console logs:
1. `cmd+shift+p` (macOS) / `ctrl+shift+p` (Windows)
2. Search for and then select "Developer: Toggle Developer Tools"
3. This will open the [Chrome DevTools window](https://developer.chrome.com/docs/devtools/)
4. Select the `Console` tab
5. Read the console logs
## Problems with Meilisearch
If you have checked the logs and the problem seems related to Meilisearch, or if context providers aren't working, you can try to manually set up Meilisearch using their instructions [here](https://www.meilisearch.com/docs/learn/getting_started/installation). Once downloaded, you should place the Meilisearch binary at `~/.continue/server/meilisearch` and start it. Once it is running on port 7700, Continue will be able to automatically connect.
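To confirm that your manually started Meilisearch instance is reachable, you can hit its `/health` endpoint (part of Meilisearch's public API); the helper below is just a convenience sketch:

```python
import json
import urllib.request

def meilisearch_healthy(base_url="http://127.0.0.1:7700"):
    """Return True if a Meilisearch instance answers its /health endpoint."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=2) as resp:
            return json.load(resp).get("status") == "available"
    except OSError:
        return False
```

If this returns `False` while the process appears to be running, check that nothing else has claimed port 7700.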
## Download an Older Version (VS Code)
If you've tried everything, reported an error, and are waiting to hear back, you can try downloading an older version of the extension. All versions are hosted on the Open VSX Registry [here](https://open-vsx.org/extension/Continue/continue). Once you've downloaded the extension, which will be a .vsix file, you can install it manually by following the instructions [here](https://code.visualstudio.com/docs/editor/extension-gallery#_install-from-a-vsix).
## Still having trouble?
Create a GitHub issue [here](https://github.com/continuedev/continue/issues/new?assignees=&labels=bug&projects=&template=bug-report-%F0%9F%90%9B.md&title=), leaving the details of your problem, and we'll be able to more quickly help you out.

View File

@ -1,64 +0,0 @@
---
title: Codebase Retrieval
description: Talk to your codebase
keywords: [talk, embeddings, codebase, experimental]
---
# Codebase Retrieval
Continue indexes your codebase so that when you input a message using Command+Enter, it can automatically pull in the most relevant context from throughout your workspace. This is done via a combination of embeddings-based retrieval and keyword search. By default, all embeddings are calculated locally with `all-MiniLM-L6-v2` and stored locally in `~/.continue/embeddings`.
The codebase retrieval feature offers the following customization options, which you can set in `config.json`:
```json title="~/.continue/config.json"
{
  "retrieval_settings": {
    "n_retrieve": 100,
    ...
  }
}
```
### `n_retrieve`
Number of results to initially retrieve from the vector database (default: 50)
### `n_final`
Final number of results to use after re-ranking (default: 10)
### `use_reranking`
Whether to use re-ranking: first select `n_retrieve` results, then use an LLM to pick the top `n_final` (default: `true`)
### `rerank_group_size`
Number of results to group together when re-ranking. Each group will be processed in parallel. (default: 5)
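The interplay of these settings can be sketched as follows; this is a toy stand-in where a plain scoring function plays the role of the LLM re-ranker, not Continue's actual implementation:

```python
def retrieve_and_rerank(candidates, score, n_retrieve=50, n_final=10,
                        rerank_group_size=5):
    """Toy two-stage retrieval: take n_retrieve candidates, re-rank them
    in groups of rerank_group_size, and keep the n_final best."""
    initial = candidates[:n_retrieve]
    # each group would be scored in parallel by the LLM; here we score directly
    groups = [initial[i:i + rerank_group_size]
              for i in range(0, len(initial), rerank_group_size)]
    scored = [(score(c), c) for group in groups for c in group]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:n_final]]
```

Turning `use_reranking` off corresponds to skipping the scoring step and returning the first `n_final` of the initial results.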
### `ignore_files`
Files to ignore when indexing the codebase. You can use glob patterns, such as `**/*.py`. This is useful for directories that contain generated code, or other directories that are not relevant to the codebase. (default: [])
### `openai_api_key`
OpenAI API key. If set, Continue will calculate embeddings by calling OpenAI's `ada-002` embeddings API. (default: `null`)
### Azure OpenAI
These settings let you connect to an Azure-hosted OpenAI API. All of them, along with `openai_api_key`, must be set in order to use Azure OpenAI for embeddings.
#### `api_base`
OpenAI API base URL (default: `null`)
#### `api_type`
OpenAI API type (default: `null`)
#### `api_version`
OpenAI API version (default: `null`)
#### `organization_id`
OpenAI organization ID (default: `null`)

View File

@ -1,85 +0,0 @@
---
title: Using Code Llama with Continue
description: How to use Code Llama with Continue
keywords: [code llama, meta, togetherai, ollama, replicate, fastchat]
---
# Using Code Llama with Continue
With Continue, you can use Code Llama as a drop-in replacement for GPT-4, either by running it locally (Ollama, GGML, FastChat) or through a hosted API (TogetherAI, Replicate).
If you haven't already installed Continue, you can do that [here](https://marketplace.visualstudio.com/items?itemName=Continue.continue). For more general information on customizing Continue, read [our customization docs](../customization/overview.md).
## TogetherAI
1. Create an account [here](https://api.together.xyz/signup)
2. Copy your API key that appears on the welcome screen
3. Update your Continue config file like this:
```json title="~/.continue/config.json"
{
  "models": [
    {
      "title": "Code Llama",
      "provider": "together",
      "model": "togethercomputer/CodeLlama-13b-Instruct",
      "api_key": "<API_KEY>"
    }
  ]
}
```
## Ollama
1. Download Ollama [here](https://ollama.ai/) (it should walk you through the rest of these steps)
2. Open a terminal and run `ollama run codellama`
3. Change your Continue config file like this:
```json title="~/.continue/config.json"
{
  "models": [
    {
      "title": "Code Llama",
      "provider": "ollama",
      "model": "codellama-7b"
    }
  ]
}
```
## Replicate
1. Get your Replicate API key [here](https://replicate.ai/)
2. Change your Continue config file like this:
```json title="~/.continue/config.json"
{
  "models": [
    {
      "title": "Code Llama",
      "provider": "replicate",
      "model": "codellama-7b",
      "api_key": "<API_KEY>"
    }
  ]
}
```
## FastChat API
1. Set up [FastChat](https://github.com/lm-sys/FastChat) to serve one of the Code Llama models from Hugging Face (e.g. `codellama/CodeLlama-7b-Instruct-hf`).
2. Start its OpenAI-compatible API server (see the [FastChat OpenAI API docs](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md)).
3. Change your Continue config file like this:
```json title="~/.continue/config.json"
{
  "models": [
    {
      "title": "Code Llama",
      "provider": "openai",
      "model": "codellama-7b",
      "api_base": "http://localhost:8000/v1"
    }
  ]
}
```

View File

@ -1,277 +0,0 @@
---
title: Config File Migration
description: Migrating from config.py to config.json
keywords: [json, config, configuration, migration]
---
# Migration to `config.json`
On November 20, 2023, we migrated to JSON as the primary config file format. If you previously used Continue, we will have attempted to automatically translate your existing `config.py` into a `config.json` file. If this fails, we fall back to a default `config.json`. Your previous `config.py` is kept, but moved to `config.py.old` for reference. Below is a list of the changes in case you need to migrate manually, along with examples of valid `config.json` files.
The JSON format provides stronger guardrails, making it easier to write a valid config, while still allowing Intellisense in VS Code.
If you need any help migrating, please reach out to us on Discord.
## Configuration as Code
For configuration that requires code, we now provide a simpler interface that works alongside config.json. In the same folder, `~/.continue`, create a file named `config.py` (the same name as before) and add a function called `modify_config`. This function should take a [`ContinueConfig`](https://github.com/continuedev/continue/blob/main/server/continuedev/core/config.py) object as its only argument, and return a `ContinueConfig` object. This object is essentially the same as the one that was previously defined in `config.py`. This allows you to modify the initial configuration object defined in your `config.json`. Here's an example that cuts the temperature in half:
```python
from continuedev.core.config import ContinueConfig

def modify_config(config: ContinueConfig) -> ContinueConfig:
    config.completion_options.temperature /= 2
    return config
```
To summarize, these are the steps taken to load your configuration:
1. Load `~/.continue/config.json`
2. Convert this into a `ContinueConfig` object
3. If `~/.continue/config.py` exists and has defined `modify_config` correctly, call `modify_config` with the `ContinueConfig` object to generate the final configuration
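In miniature, that loading sequence behaves like this (plain dicts stand in for the real `ContinueConfig` object, and the function names are illustrative, not Continue's internals):

```python
def load_config(config_json, modify_config=None):
    """Step 2: build the config object from parsed JSON;
    step 3: apply config.py's modify_config hook if one is defined."""
    config = dict(config_json)
    if modify_config is not None:
        config = modify_config(config)
    return config

def halve_temperature(config):
    # plays the role of a user-defined modify_config in config.py
    config["completion_options"]["temperature"] /= 2
    return config

final = load_config({"completion_options": {"temperature": 1.0}},
                    halve_temperature)
```

With no `config.py` present, the hook is simply skipped and the JSON config is used as-is.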
## List of Changes
### `completion_options`
The properties `top_p`, `top_k`, `temperature`, `presence_penalty`, and `frequency_penalty` have been moved into a single object called `completion_options`. It can be specified at the top level of the config or within a `models` object.
### `request_options`
The properties `timeout`, `verify_ssl`, `ca_bundle_path`, `proxy`, and `headers` have been moved into a single object called `request_options`, which can be specified for each `models` object.
### The `model` property
Instead of writing something like `Ollama(model="phind-codellama:34b", ...)`, where the `model` property was different depending on the provider and had to be exactly correct, we now offer a default set of models, including the following:
```python
# OpenAI
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-4",
"gpt-3.5-turbo-0613",
"gpt-4-32k",
"gpt-4-1106-preview",
# Open-Source
"mistral-7b",
"llama2-7b",
"llama2-13b",
"codellama-7b",
"codellama-13b",
"codellama-34b",
"phind-codellama-34b",
"wizardcoder-7b",
"wizardcoder-13b",
"wizardcoder-34b",
"zephyr-7b",
"codeup-13b",
"deepseek-1b",
"deepseek-7b",
"deepseek-33b",
# Anthropic
"claude-2",
# Google PaLM
"chat-bison-001",
```
If you want to use a model not listed here, you can still do that by specifying whichever value of `model` you need. But if there's something you think we should add as a default, let us know!
### Prompt template auto-detection
Based on the `model` property, we now attempt to [autodetect](https://github.com/continuedev/continue/blob/108e00c7db9cad110c5df53bdd0436b286b92466/server/continuedev/core/config_utils/shared.py#L38) the prompt template. If you want to be explicit, you can select one of our prompt template types (`"llama2", "alpaca", "zephyr", "phind", "anthropic", "chatml", "deepseek"`) or write a custom prompt template in `config.py`.
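Conceptually, the autodetection matches substrings of the model name against known template families, roughly like this sketch (the mapping below is illustrative, not the exact table Continue uses):

```python
from typing import Optional

# illustrative (model-name substring, template) pairs; order matters,
# since "phind-codellama" should match "phind" before "codellama"
TEMPLATE_HINTS = [
    ("phind", "phind"),
    ("codellama", "llama2"),
    ("llama2", "llama2"),
    ("zephyr", "zephyr"),
    ("deepseek", "deepseek"),
    ("claude", "anthropic"),
]

def autodetect_template(model: str) -> Optional[str]:
    for hint, template in TEMPLATE_HINTS:
        if hint in model.lower():
            return template
    return None  # e.g. OpenAI chat models need no raw-text template
```

Setting an explicit template type in your config bypasses this detection entirely.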
### `PromptTemplate`
If you were previously using the `PromptTemplate` class in your `config.py` to write a custom template, we have moved it from `continuedev.libs.llm.base` to `continuedev.models.llm`.
## Examples of `config.json`
After the "Full example" these examples will only show the relevant portion of the config file.
### Full example, with OpenAI Free Trial
```json
{
  "models": [
    {
      "title": "GPT-4",
      "provider": "openai-free-trial",
      "model": "gpt-4"
    },
    {
      "title": "GPT-3.5-Turbo",
      "provider": "openai-free-trial",
      "model": "gpt-3.5-turbo"
    }
  ],
  "system_message": "Always be kind",
  "completion_options": {
    "temperature": 0.5
  },
  "model_roles": {
    "default": "GPT-4",
    "summarize": "GPT-3.5-Turbo"
  },
  "slash_commands": [
    {
      "name": "edit",
      "description": "Edit highlighted code",
      "step": "EditHighlightedCodeStep"
    },
    {
      "name": "config",
      "description": "Customize Continue",
      "step": "OpenConfigStep"
    },
    {
      "name": "comment",
      "description": "Write comments for the highlighted code",
      "step": "CommentCodeStep"
    },
    {
      "name": "share",
      "description": "Download and share this session",
      "step": "ShareSessionStep"
    },
    {
      "name": "cmd",
      "description": "Generate a shell command",
      "step": "GenerateShellCommandStep"
    }
  ],
  "custom_commands": [
    {
      "name": "test",
      "prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "context_providers": [{ "name": "terminal" }, { "name": "diff" }]
}
```
### Ollama with CodeLlama 13B
```json
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "codellama-13b"
    }
  ]
}
```
### Claude 2
```json
{
  "models": [
    {
      "title": "Claude-2",
      "provider": "anthropic",
      "model": "claude-2",
      "api_key": "sk-ant-api03-REST_OF_API_KEY",
      "context_length": 100000
    }
  ]
}
```
### LM Studio with Phind Codellama 34B
```json
{
  "models": [
    {
      "title": "LM Studio",
      "provider": "lmstudio",
      "model": "phind-codellama-34b"
    }
  ]
}
```
### OpenAI-compatible API
This is an example of serving a model using an OpenAI-compatible API on http://localhost:8000.
```json
{
  "models": [
    {
      "title": "OpenAI-compatible API",
      "provider": "openai",
      "model": "codellama-13b",
      "api_base": "http://localhost:8000"
    }
  ]
}
```
### Azure OpenAI
```json
{
  "models": [
    {
      "title": "Azure OpenAI",
      "provider": "openai",
      "model": "gpt-3.5-turbo",
      "api_key": "my-api-key",
      "api_base": "https://my-azure-openai-instance.openai.azure.com/",
      "engine": "my-azure-openai-deployment",
      "api_version": "2023-07-01-preview",
      "api_type": "azure"
    }
  ]
}
```
### TogetherAI
```json
{
  "models": [
    {
      "title": "Phind CodeLlama",
      "provider": "together",
      "model": "phind-codellama-34b",
      "api_key": "<your-api-key>"
    }
  ]
}
```
### Temperature, top_p, etc...
The `completion_options` for each model override the top-level `completion_options`. For example, the "GPT-4" model here will have a temperature of 0.8, while the "GPT-3.5-Turbo" model will have a temperature of 0.5.
```json
{
  "models": [
    {
      "title": "GPT-4",
      "provider": "openai-free-trial",
      "model": "gpt-4",
      "completion_options": {
        "top_p": 0.9,
        "top_k": 40,
        "temperature": 0.8
      }
    },
    {
      "title": "GPT-3.5-Turbo",
      "provider": "openai-free-trial",
      "model": "gpt-3.5-turbo"
    }
  ],
  "completion_options": {
    "temperature": 0.5,
    "presence_penalty": 0.5,
    "frequency_penalty": 0.5
  }
}
```
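The override behavior amounts to a per-model merge over the top-level defaults, roughly as follows (this assumes a key-by-key merge, which matches how the example above behaves; the function name is illustrative):

```python
def effective_options(top_level, per_model=None):
    """Per-model completion options win over the top-level defaults."""
    merged = dict(top_level)
    merged.update(per_model or {})
    return merged

defaults = {"temperature": 0.5, "presence_penalty": 0.5,
            "frequency_penalty": 0.5}
# "GPT-4" supplies its own options; "GPT-3.5-Turbo" supplies none
gpt4 = effective_options(defaults, {"top_p": 0.9, "top_k": 40,
                                    "temperature": 0.8})
gpt35 = effective_options(defaults)
```

Keys a model does not set (here `presence_penalty` and `frequency_penalty`) fall through to the top-level values.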

View File

@ -1,42 +0,0 @@
---
title: Headless Mode
description: Running Continue in the background
keywords: [headless, async, background, ci/cd]
---
# Headless Mode
"Headless mode" allows Continue to run in the background, without needing to be connected to the IDE or GUI. This is useful for performing refactors or other long-running tasks asynchronously. Headless mode can also be run in CI/CD, for example, to perform a thorough review for errors.
To use headless mode:
1. `pip install continuedev` (using a virtual environment is recommended)
2. Import `continuedev` and call `run` with the `Step` you would like to run
Example:
Say you have the following file (`/path/to/file.py`):
```python
def say_hello(name: str):
    print(f"Hello, {name}")
```
and this function is imported and used in multiple places throughout your codebase. But the `name` parameter is new, and you need to change the function call everywhere it is used. You can use the script below to edit all usages of the function in your codebase:
```python
from continuedev import run
from continuedev.models.main import Position, PositionInFile
from continuedev.plugins.steps.refactor import RefactorReferencesStep

step = RefactorReferencesStep(
    user_input="",
    symbol_location=PositionInFile(
        filepath="/path/to/file.py",
        position=Position(line=0, character=5),
    ),
)
run(step)
```
Here we use Continue's built-in `RefactorReferencesStep`. By passing it the location (filepath and position) of the symbol (function, variable, etc.) that we want to update, Continue will automatically find all references to that symbol and prompt an LLM to make the edit requested in the `user_input` field.

View File

@ -1,54 +0,0 @@
---
title: Manually Run Continue
description: How to run Continue manually
keywords: [manual, firewall, vpn, air-gapped, self-host]
---
# Manually Run Continue
You might want to run Continue manually if
(a) a firewall, VPN, or other issue is stopping Continue from automatically downloading the server binary,
(b) you are on an OS where the binary fails to run (e.g. RHEL8),
(c) you are using an air-gapped computer,
(d) you want to self-host Continue, or
(e) you want to run from source while developing / modifying Continue's code.
In all cases, you should go to VS Code settings, search "continue" and check the box that says "Manually Running Server". This will stop Continue from trying to kill and redownload the server binary.
Next, you'll just need to start a server on your own, and then reload the VS Code window. Below are the four ways you can start a server.
## (Recommended) Use the `continuedev` PyPI package
The easiest way to run Continue is to
1. Download the `continuedev` PyPI package by running `pip install continuedev`
2. Start the server by running `python -m continuedev` in your terminal
## Download the server binary
If you'd like to use a pre-built binary, you can download one manually from our S3 bucket. These are the download links for each OS:
- [macOS (Intel)](https://continue-server-binaries.s3.us-west-1.amazonaws.com/mac/continue_server)
- [macOS (Apple Silicon)](https://continue-server-binaries.s3.us-west-1.amazonaws.com/apple-silicon/continue_server)
- [Windows](https://continue-server-binaries.s3.us-west-1.amazonaws.com/windows/continue_server.exe)
- [Linux](https://continue-server-binaries.s3.us-west-1.amazonaws.com/linux/continue_server)
Once downloaded, start the binary by running `./continue_server` (macOS/Linux) or `./continue_server.exe` (Windows) in the directory where you downloaded it. You should see that it begins listening on port 65432.
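To verify that the server is actually listening before you reload your IDE, a quick socket check is enough (port 65432 is the server's default, per the docs above):

```python
import socket

def server_is_up(host="127.0.0.1", port=65432, timeout=1.0):
    """Return True if something is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns `False`, check the server's terminal output for startup errors before troubleshooting the extension itself.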
## Build the server binary from source
If you don't want to use the PyPI package, but need a version of Continue that works on an OS not listed above, then you can build the server binary from source.
1. Clone the [Continue repo](https://github.com/continuedev/continue)
2. Change directories into the repo: `cd continue`
3. Run the build script: `sh build.sh` (or `sh build.sh m1` if building for an M1 Mac, or `build.cmd` if on Windows without WSL)
4. The binary is output to the `./dist` folder. Start the server by running `./dist/continue_server`; you should see that it begins listening on port 65432.
## Run the server from source
If you want to develop or modify Continue's code, you can run the server from source. To do this, follow the instructions on development setup in our [CONTRIBUTING.md](https://github.com/continuedev/continue/blob/main/CONTRIBUTING.md#environment-setup).

View File

@ -1,16 +0,0 @@
---
title: Running Continue without Internet
description: How to run Continue without Internet
keywords: [no internet, air-gapped, local model]
---
# Running Continue without Internet
Continue can be run even on an air-gapped computer if you use a local model. You'll have to make a few adjustments for this to work.
1. Download the latest .vsix file from the [Open VSX Registry](https://open-vsx.org/extension/Continue/continue) and [install it to VS Code](https://code.visualstudio.com/docs/editor/extension-marketplace#_install-from-a-vsix).
2. In VS Code settings, search "continue" and check the box that says "Manually Running Server". This will stop Continue from trying to kill and redownload the server binary.
3. Follow instructions to [run Continue manually](./manually-run-continue.md).
4. Open `~/.continue/config.json` and set `"allow_anonymous_telemetry": false`. This will stop Continue from attempting requests to PostHog.
5. Also in `config.json`, set the default model to a local model. You can read about the available options [here](../customization/models.md).
6. Restart VS Code to ensure that the changes to `config.json` take effect.

View File

@ -1,165 +0,0 @@
// @ts-check
// Note: type annotations allow type checking and IDEs autocompletion

const lightCodeTheme = require("prism-react-renderer/themes/github");
const darkCodeTheme = require("prism-react-renderer/themes/dracula");

/** @type {import('@docusaurus/types').Config} */
const config = {
  title: "Continue",
  tagline:
    "the open-source library for accelerating software development with language models",
  favicon: "img/favicon.ico",

  // Set the production url of your site here
  url: "https://continue.dev",
  // Set the /<baseUrl>/ pathname under which your site is served
  // For GitHub pages deployment, it is often '/<projectName>/'
  baseUrl: "/docs",

  // GitHub pages deployment config.
  // If you aren't using GitHub pages, you don't need these.
  organizationName: "continuedev", // Usually your GitHub org/user name.
  projectName: "continue", // Usually your repo name.

  onBrokenLinks: "throw",
  onBrokenMarkdownLinks: "warn",

  // Even if you don't use internationalization, you can use this field to set
  // useful metadata like html lang. For example, if your site is Chinese, you
  // may want to replace "en" with "zh-Hans".
  i18n: {
    defaultLocale: "en",
    locales: ["en"],
  },

  presets: [
    [
      "classic",
      /** @type {import('@docusaurus/preset-classic').Options} */
      ({
        docs: {
          routeBasePath: "/",
          sidebarPath: require.resolve("./sidebars.js"),
          editUrl: "https://github.com/continuedev/continue/tree/main/docs",
        },
        theme: {
          customCss: require.resolve("./src/css/custom.css"),
        },
        gtag: {
          trackingID: "G-M3JWW8N2XQ",
        },
      }),
    ],
  ],

  themeConfig:
    /** @type {import('@docusaurus/preset-classic').ThemeConfig} */
    ({
      metadata: [
        {
          name: "keywords",
          content:
            "open source, ai, vscode, intellij, jetbrains, developer tools, chatgpt, copilot, llm",
        },
      ],
      // Replace with your project's social card
      image: "img/continue-social-card.png",
      navbar: {
        title: "Continue",
        logo: {
          alt: "Continue Logo",
          src: "img/logo.png",
          href: "https://continue.dev",
        },
        items: [
          {
            type: "docSidebar",
            sidebarId: "docsSidebar",
            position: "left",
            label: "Docs",
          },
          {
            href: "https://github.com/continuedev/continue",
            label: "GitHub",
            position: "right",
          },
        ],
      },
      footer: {
        style: "dark",
        links: [
          {
            title: "Docs",
            items: [
              {
                label: "Introduction",
                to: "/intro",
              },
              {
                label: "VS Code",
                to: "https://marketplace.visualstudio.com/items?itemName=Continue.continue",
              },
            ],
          },
          {
            title: "Community",
            items: [
              {
                label: "Discord",
                href: "https://discord.gg/vapESyrFmJ",
              },
              {
                label: "Twitter",
                href: "https://twitter.com/continuedev",
              },
            ],
          },
          {
            title: "More",
            items: [
              {
                label: "GitHub",
                href: "https://github.com/continuedev/continue",
              },
              {
                label: "Website",
                href: "https://continue.dev",
              },
            ],
          },
        ],
        copyright: `Copyright © ${new Date().getFullYear()} Continue Dev, Inc.`,
      },
      prism: {
        theme: lightCodeTheme,
        darkTheme: darkCodeTheme,
      },
      algolia: {
        // The application ID provided by Algolia
        appId: "0OMUMCQZVV",
        // Public API key: it is safe to commit it
        apiKey: "6795de0f612eebe17018f8061a9ef18e",
        indexName: "continue",
        // Optional: see doc section below
        contextualSearch: true,
      },
    }),

  plugins: [
    [
      "@docusaurus/plugin-client-redirects",
      {
        redirects: [
          // Redirects from old docs
          {
            from: "/customization",
            to: "/customization/overview",
          },
          {
            from: "/getting-started",
            to: "/quickstart",
          },
        ],
      },
    ],
  ],
};

module.exports = config;

View File

@ -1,4 +0,0 @@
[[redirects]]
from = "/"
to = "/docs/intro"
force = true

21872
src/docs/package-lock.json generated

File diff suppressed because it is too large

View File

@ -1,45 +0,0 @@
{
  "name": "continue-docs",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "docusaurus": "docusaurus",
    "start": "docusaurus start",
    "build": "docusaurus build",
    "build:netlify": "docusaurus build --out-dir build/docs",
    "swizzle": "docusaurus swizzle",
    "deploy": "docusaurus deploy",
    "clear": "docusaurus clear",
    "serve": "docusaurus serve",
    "write-translations": "docusaurus write-translations",
    "write-heading-ids": "docusaurus write-heading-ids"
  },
  "dependencies": {
    "@docusaurus/core": "2.4.0",
    "@docusaurus/plugin-client-redirects": "2.4.0",
    "@docusaurus/preset-classic": "2.4.0",
    "@mdx-js/react": "^1.6.22",
    "clsx": "^1.2.1",
    "prism-react-renderer": "^1.3.5",
    "react": "^17.0.2",
    "react-dom": "^17.0.2"
  },
  "devDependencies": {
    "@docusaurus/module-type-aliases": "2.4.0"
  },
  "browserslist": {
    "production": [
      ">0.5%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  },
  "engines": {
    "node": ">=16.14"
  }
}

View File

@ -1,66 +0,0 @@
/**
 * Creating a sidebar enables you to:
 *  - create an ordered group of docs
 *  - render a sidebar for each doc of that group
 *  - provide next/previous navigation
 *
 * The sidebars can be generated from the filesystem, or explicitly defined here.
 *
 * Create as many sidebars as you want.
 */

// @ts-check

/** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */
const sidebars = {
  docsSidebar: [
    "intro",
    "quickstart",
    "how-to-use-continue",
    "how-continue-works",
    {
      type: "category",
      label: "🎨 Customization",
      collapsible: true,
      collapsed: true,
      items: [
        "customization/overview",
        "customization/models",
        "customization/context-providers",
        "customization/slash-commands",
        "customization/other-configuration",
      ],
    },
    {
      type: "category",
      label: "🚶 Walkthroughs",
      collapsible: true,
      collapsed: true,
      items: [
        "walkthroughs/codellama",
        "walkthroughs/manually-run-continue",
        "walkthroughs/running-continue-without-internet",
        "walkthroughs/headless-mode",
        "walkthroughs/codebase-embeddings",
        "walkthroughs/config-file-migration",
      ],
    },
    "development-data",
    "telemetry",
    "troubleshooting",
    {
      type: "category",
      label: "📖 Reference",
      collapsible: true,
      collapsed: true,
      items: [
        {
          type: "autogenerated",
          dirName: "reference",
        },
      ],
    },
  ],
};

module.exports = sidebars;

Some files were not shown because too many files have changed in this diff