This is a Node.js command-line interface (CLI) tool created by Arindam that interacts with the Google Generative AI API to generate content based on user input. The tool allows you to ask questions directly or provide context from files, directories, or PDFs.
To install this package, you need to have Node.js and npm installed. You can install the package globally using npm:
```bash
npm install -g gen-ai-chat
```

To ask a question directly from the command line:
```bash
npx gen-ai-chat "Your question here"
```

To provide additional context from a file, use the `-f` flag followed by the file path:
```bash
npx gen-ai-chat "Your question here" -f /path/to/your/file.txt
```

To provide additional context from all files in a directory, use the `-d` flag followed by the directory path:
```bash
npx gen-ai-chat "Your question here" -d /path/to/your/directory
```

To provide additional context from a PDF file, use the `--pdf` or `-p` flag followed by the file path:
```bash
npx gen-ai-chat "Your question here" --pdf /path/to/your/file.pdf
```

To start the tool in interactive mode, where you can ask multiple questions in a session:
```bash
npx gen-ai-chat -i
```

or

```bash
npx gen-ai-chat --interactive
```

In interactive mode, the prompt `gen-ai-chat>` will appear, indicating that the tool is ready for you to type your question or command.
To exit interactive mode, type `exit` or `quit` and press Enter.
You can choose your favorite model interactively by using the --choose-model option:
```bash
npx gen-ai-chat --choose-model
```

This command will prompt you to select a model from the available options:
```
? Please select a model: (Use arrow keys)
❯ gemini-pro
  gemini-1.5-flash-latest
  gemini-1.5-pro-latest
  gemini-pro-vision
  text-embedding-004
```
Alternatively, you can specify the model directly using the `--model` option followed by the model name:
```bash
npx gen-ai-chat "Your question here" --model gemini-1.5-flash-latest
```

By default, logs are stored in memory. To write the in-memory logs to a file, use the `--write-logs` option:
```bash
npx gen-ai-chat --write-logs
```

You can provide your own API key by setting the `API_KEY` environment variable in a `.env` file:
```
API_KEY=your_google_gemini_api_key
```

If you do not provide your own API key, the tool will use a default key with a limit of 10 requests per hour.
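The 10-requests-per-hour cap on the shared default key can be pictured as a sliding-window counter. This is an illustrative sketch of such a limiter, not the tool's actual enforcement logic:

```javascript
// Sliding-window rate limiter: allow at most `limit` requests
// in any trailing window of `windowMs` milliseconds.
class RateLimiter {
  constructor(limit = 10, windowMs = 60 * 60 * 1000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if the request is allowed, false if the limit is hit.
  tryRequest(now = Date.now()) {
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```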
```bash
npx gen-ai-chat "What is the capital of France?"
npx gen-ai-chat "Summarize the content of this file" -f ./example.txt
npx gen-ai-chat "Summarize the content of these files" -d ./example-directory
npx gen-ai-chat "Summarize the content of this PDF" --pdf ./example.pdf
npx gen-ai-chat --choose-model
```

or

```bash
npx gen-ai-chat "Your question here" --model gemini-1.5-flash-latest
npx gen-ai-chat --write-logs
```

The `--write-logs` command will write all in-memory logs to a file in the logs directory.
- If you do not provide a question, the CLI will prompt you to ask a question.
- If the file path provided with the `-f` flag is invalid, an error message will be displayed.
- If the directory path provided with the `-d` flag is invalid, an error message will be displayed.
- If the request limit is reached and no user API key is provided, a message will be displayed indicating that the request limit has been reached.
- If the selected model fails, the tool will fall back to `gemini-pro` and try again.
- If the API key is missing or invalid, an error message will be displayed.
- If there is a network issue, an error message will be displayed indicating the problem.
- If the response from the API is malformed or unexpected, an error message will be displayed.
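The fallback behavior described above can be sketched as a simple try/retry wrapper. Here, `askModel` is a hypothetical stand-in for the actual API call, not a function the tool exposes:

```javascript
const FALLBACK_MODEL = "gemini-pro";

// Try the selected model first; if it throws, retry once with gemini-pro.
// `askModel(modelName, question)` is a placeholder for the real API call.
async function askWithFallback(askModel, model, question) {
  try {
    return await askModel(model, question);
  } catch (err) {
    if (model === FALLBACK_MODEL) throw err; // nothing left to fall back to
    return await askModel(FALLBACK_MODEL, question);
  }
}
```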
- Presence of `.env` file: The `.env` file must be present in the directory from which the `npx` command is executed.
- Loading environment variables: The script uses `require('dotenv').config();` to load the environment variables from the `.env` file.
- Accessing the API key: The script accesses the API key from `process.env.API_KEY`.
- Model fallback: If the selected model fails, the tool will fall back to `gemini-pro` and try again.
By ensuring the `.env` file is in the correct location and the script is configured to load it, the script will be able to access the API key when run with `npx`.
This project is licensed under the MIT License. See the LICENSE file for details.
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.
For any questions or issues, please open an issue on the GitHub repository.