TOM, the most complete client for ChatGPT's API
The OpenAI API for ChatGPT is now public, and with TOM, you can unleash the power of GPT-4 Turbo and GPT-4 Vision on your mobile device.
Talk directly to GPT-4, start a discussion, or take photos and ask questions about them. You can speak in any language; TOM understands them all.
Change the way TOM behaves by tapping on the system prompt. Make it play any role you want.
Enjoy highly accurate voice recognition with OpenAI's Whisper and remarkably human-sounding speech with OpenAI's TTS. Alternatively, keep them disabled and use Google's services instead, for lower latency, lower costs, and a snappier experience.
You can also use GPT-3.5 Turbo for quicker responses and lower costs.
TOM is free and always will be. But to use the AI itself, you'll need an API key from its owner, OpenAI.
A GPT API client
You don't need a monthly subscription to enjoy GPT-4 Turbo or GPT-4 Vision: just an API key. And the good news is that creating an API key on OpenAI's site is free; you only pay for what you use. Here’s how to get started:
1. Create your API key on https://platform.openai.com/api-keys
2. Use your API key in TOM to unleash THE BEAST
If at any time you need to update or change the API key you're using, tap on the KEY button.
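If you'd like to check that your new key works outside the app, here is roughly what a call to OpenAI's chat completions API looks like. This is only a sketch using the official openai Python SDK; the model name and prompt are illustrative, not something TOM requires:

```python
# Minimal check that an OpenAI API key works, using the official openai SDK (pip install openai).
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # the key created on platform.openai.com

# Illustrative model and prompt; TOM handles model selection for you.
response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```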
Controls
Use the selector at the top to switch between GPT-3.5 Turbo and GPT-4 Turbo, whether to manage your costs or to get a quicker response. GPT-4 Vision is selected automatically whenever you take a photo.
Tap on TOM's description to set your own system prompt. It guides GPT on how to interact with you.
Tap on the SPEAK button to talk to GPT.
Tap on the CAMERA button to take a picture and ask anything about it.
You can continue discussing that photo by tapping on SPEAK afterwards.
Keep in mind, however, that your CONTEXT will grow.
What's the context?
The context is everything said in your current conversation, pictures included. It is sent to the API with every request; that's how GPT remembers the conversation.
It grows with every new sentence and especially with every new picture. The larger the context sent to the API, the longer the response time, and, just as importantly, OpenAI charges based on the size of your context.
To help you find the right balance, TOM lets you clear the context whenever it gets particularly heavy, although GPT will then forget all previous interactions. Use the BIN button for this.
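To make this concrete, here is a rough sketch of what "sending the context" means at the API level. It uses the official openai Python SDK and an invented system prompt; it illustrates the mechanism, not TOM's actual code:

```python
# Sketch: a chat client resends the whole conversation (the context) on every turn.
from openai import OpenAI

client = OpenAI(api_key="sk-...")

# The context: the system prompt plus every exchange so far (illustrative prompt).
context = [{"role": "system", "content": "You are TOM, a friendly voice assistant."}]

def ask(user_text: str) -> str:
    context.append({"role": "user", "content": user_text})
    # The FULL context is sent each time; this is how GPT "remembers" earlier turns,
    # and why cost and latency grow as the conversation gets longer.
    response = client.chat.completions.create(model="gpt-4-turbo", messages=context)
    answer = response.choices[0].message.content
    context.append({"role": "assistant", "content": answer})  # the context keeps growing
    return answer

def clear_context() -> None:
    # What the BIN button does conceptually: GPT forgets everything said before.
    del context[1:]  # keep only the system prompt
```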
Image sizes
TOM offers three settings for pictures sent to GPT: fast, medium, and quality.
'Fast' is the default, providing smaller images for quicker interaction with GPT. It works well with text and most types of images.
'Medium' offers more detail but results in slightly larger images.
Use 'quality' for the most accuracy. These images are the heaviest and the most expensive to process through the OpenAI API.
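For reference, this is roughly how a picture reaches GPT-4 Vision: it is encoded and sent alongside your question, together with a detail level. The sketch below uses the official openai Python SDK; how exactly TOM's fast/medium/quality settings map onto image resizing and the API's detail parameter is an assumption, not documented behaviour:

```python
# Sketch of a GPT-4 Vision request: an encoded photo plus a question about it.
import base64
from openai import OpenAI

client = OpenAI(api_key="sk-...")

with open("photo.jpg", "rb") as f:  # illustrative file name
    image_b64 = base64.b64encode(f.read()).decode("ascii")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this picture?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": f"data:image/jpeg;base64,{image_b64}",
                    # "low" is cheaper and faster, "high" uses more tokens; a rough
                    # analogue of TOM's fast/quality settings (an assumption here).
                    "detail": "low",
                },
            },
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```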
Whisper and TTS
Whisper is an OpenAI neural net that approaches human-level robustness and accuracy in speech recognition. If enabled, the speech TOM sends to GPT is recognized with extra accuracy, but at an additional cost.
TTS (Text-to-Speech) is an OpenAI system that turns text into lifelike spoken audio. It also incurs additional costs.
Both options are enabled by default for a better user experience. Either can be disabled for quicker responses on slow networks, or to reduce your costs. With both enabled, though, the experience is truly awesome.
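Under the hood these are two separate OpenAI endpoints: audio transcription (Whisper) and speech synthesis (TTS). Here is a minimal sketch with the official openai Python SDK; the file names and voice are illustrative:

```python
# Sketch: speech-to-text with Whisper, then text-to-speech with OpenAI TTS.
from openai import OpenAI

client = OpenAI(api_key="sk-...")

# 1. Transcribe a recorded question (Whisper handles many languages automatically).
with open("question.m4a", "rb") as audio_file:  # illustrative recording
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
print(transcript.text)

# 2. Turn GPT's answer back into lifelike audio.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # illustrative voice choice
    input="Hello! How can I help you today?",
)
speech.stream_to_file("answer.mp3")
```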