The "Advanced Master Prompt" for JARVIS (Version 1.5)
Copy and paste the entire block below into Google's Gemini 3 Pro.

**Role:** You are an expert software architect and front-end developer. Your task is to generate a fully functional, single-page HTML/CSS/JavaScript application for a Personal AI Assistant named "JARVIS."
**Core Concept:** This application is a web-based control hub. It features a futuristic, Iron-Man-inspired UI with a central "activation" mechanism. The assistant listens for voice commands and executes a predefined set of actions.
**Functional Requirements (The "Skills"):**
The assistant must understand and execute the following voice commands with high accuracy. The system should provide visual and auditory feedback for each action.
1. **Wake Word & Activation:**
- The user must click a prominent "ACTIVATE" button (styled like an arc reactor) to start listening.
- Upon activation, the button should change to "DEACTIVATE" (or a listening state), and the interface should indicate that the microphone is active.
- A deactivation method (clicking the same button) must stop the listening loop (see the activation sketch after this list).
2. **Voice Command Processing:**
- Use the Web Speech API (`SpeechRecognition`, or the prefixed `webkitSpeechRecognition` in Chrome) for voice capture.
- The assistant should continuously listen while activated, process the command, execute it, and then listen for the next command until deactivated.
- Provide real-time feedback of what the user said (e.g., "You said: [command]").
- Provide feedback on the action being taken (e.g., "Executing: [action]").
3. **Command Execution (The Core Features):**
- **Web Navigation:** If the user says "Open [website name]" (e.g., "Open Google", "Open YouTube"), the system should open `https://www.[websitename].com` in a new browser tab. Handle cases where the user says "Go to" or "Navigate to" (one way to parse these phrasings is shown in the routing sketch after this list).
- **YouTube Music/Video:** If the user says "Play [song/video name] on YouTube", the system should search for that query on YouTube and open the results page (e.g., `https://www.youtube.com/results?search_query=[URL-encoded song name]`).
- **WhatsApp Messaging (Web-based):** If the user says "Send message to [contact name] saying [message content]" (e.g., "Send message to John Doe saying I will be late"), the system should:
- Open `https://web.whatsapp.com`.
- Display a clear instruction to the user: "Please scan the QR code to log in to WhatsApp Web. After logging in, I will assist with the message."
- **Note:** Due to browser security restrictions, the app cannot type directly into WhatsApp. Instead, after a 15-second delay (to allow time for the QR scan), it should open a pre-populated `https://wa.me/` link.
- Implement a simple in-app mapping of contact names to phone numbers (e.g., an object like `const contacts = {"john doe": "1234567890"};`); the contact's number must already be in this mapping for the feature to work.
- If the name is found, open `https://wa.me/[number]?text=[URL-encoded message]`; if not, inform the user that the contact is unknown.
- **Time & Date:** If the user asks "What time is it?" or "What's today's date?", the assistant should use JavaScript's `Date` object to speak the answer aloud using the `SpeechSynthesisUtterance` API.
- **Basic System Control Simulation:** If the user says "Increase volume" or "Decrease volume", the assistant should provide visual feedback (e.g., a volume bar on the UI that goes up/down) and a spoken response like "Simulating volume increase." (Do not attempt to actually change system volume due to browser limits).
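For reference, here is a minimal sketch of the activation loop, assuming a button with id `reactor` and a status element with id `status` (illustrative names, not part of the spec); `handleCommand` is defined in the routing sketch that follows:

```javascript
// Activation-loop sketch. The "reactor" and "status" element ids are
// illustrative assumptions, not part of the spec above; handleCommand()
// is defined in the routing sketch that follows.
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const reactor = document.getElementById("reactor");
const statusEl = document.getElementById("status");
let listening = false;
let recognition = null;

if (!SpeechRecognition) {
  statusEl.textContent = "Status: Speech recognition is not supported in this browser.";
} else {
  recognition = new SpeechRecognition();
  recognition.continuous = true;      // keep listening between commands
  recognition.interimResults = false; // final transcripts only

  recognition.onresult = (event) => {
    const result = event.results[event.results.length - 1];
    handleCommand(result[0].transcript.trim().toLowerCase());
  };

  recognition.onerror = (event) => {
    // e.g. "not-allowed" when the microphone permission is denied
    statusEl.textContent = `Status: Error (${event.error})`;
  };

  // Browsers end recognition after a pause; restart while still active
  // so the assistant keeps listening until it is deactivated.
  recognition.onend = () => {
    if (listening) recognition.start();
  };
}

reactor.addEventListener("click", () => {
  if (!recognition) return;
  listening = !listening;
  if (listening) {
    recognition.start();
    reactor.textContent = "DEACTIVATE";
    statusEl.textContent = "Status: Listening...";
  } else {
    recognition.stop();
    reactor.textContent = "ACTIVATE";
    statusEl.textContent = "Status: Idle";
  }
});
```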
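And a hedged sketch of the command routing itself. The `transcript` element, the `speak()` helper, and the `contacts` map (defined in the storage sketch under the technical details) are assumptions; the regular expressions are one possible reading of the phrasings above:

```javascript
// Command-routing sketch. The "transcript" id, the speak() helper, and the
// contacts map (see the storage sketch under the technical details) are all
// illustrative assumptions.
function handleCommand(command) {
  document.getElementById("transcript").textContent = `You said: ${command}`;

  // "Play [query] on YouTube" -> open the search results page
  const play = command.match(/^play (.+) on youtube$/);
  if (play) {
    speak(`Searching YouTube for ${play[1]}`);
    window.open(`https://www.youtube.com/results?search_query=${encodeURIComponent(play[1])}`, "_blank");
    return;
  }

  // "Open / Go to / Navigate to [site]" -> https://www.[site].com
  const nav = command.match(/^(?:open|go to|navigate to) (.+)$/);
  if (nav) {
    const site = nav[1].replace(/\s+/g, ""); // "you tube" -> "youtube"
    speak(`Opening ${nav[1]}`);
    window.open(`https://www.${site}.com`, "_blank");
    return;
  }

  // "Send message to [name] saying [text]" -> wa.me deep link
  const msg = command.match(/^send message to (.+?) saying (.+)$/);
  if (msg) {
    const number = contacts[msg[1]];
    if (!number) {
      speak(`I don't have a number saved for ${msg[1]}`);
      return;
    }
    window.open("https://web.whatsapp.com", "_blank");
    speak("Please scan the QR code to log in to WhatsApp Web.");
    setTimeout(() => {
      // NOTE: popup blockers may stop a window.open that is not tied to a
      // user gesture; a clickable fallback link on the UI is safer.
      window.open(`https://wa.me/${number}?text=${encodeURIComponent(msg[2])}`, "_blank");
    }, 15000); // 15-second delay to allow for the QR scan
    return;
  }

  // Time and date queries, answered aloud
  if (command.includes("time")) return speak(`It is ${new Date().toLocaleTimeString()}`);
  if (command.includes("date")) return speak(`Today is ${new Date().toLocaleDateString()}`);

  speak("Command not recognized.");
}

// Spoken feedback via the speech-synthesis half of the Web Speech API
function speak(text) {
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}
```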
**User Interface & Experience (UI/UX) Requirements:**
- **Theme:** Dark, futuristic, holographic, and glitchy. Think Iron Man's HUD or a cyberpunk interface.
- **Layout:**
- A central, circular "Arc Reactor" button that serves as the main activation/deactivation toggle. It should glow blue when inactive and pulse red when actively listening.
- A status display area just above or below the button to show current state (e.g., "Status: Listening...", "Status: Idle").
- A transcript area to show the user's spoken command.
- A response/feedback area to show what action the AI is taking.
- A small, stylized log panel to show a history of commands and actions.
- **Typography:** Use monospace or futuristic fonts like 'Orbitron', 'Rajdhani', or 'Share Tech Mono'. Import them from Google Fonts.
- **Animations:** Subtle glitch effects, data-scanning lines, and pulsing glows on the main button (a pulse sketch follows this section).
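One way to satisfy the pulsing-glow requirement without hand-written CSS keyframes is the Web Animations API; the sketch below reuses the hypothetical `reactor` id from the activation sketch:

```javascript
// Pulse sketch using the Web Animations API instead of CSS keyframes.
// "reactor" is the same illustrative button id used in the activation sketch;
// call setListeningGlow(listening) from its click handler.
const reactorBtn = document.getElementById("reactor");
let pulse = null;

function setListeningGlow(active) {
  if (pulse) { pulse.cancel(); pulse = null; }
  if (active) {
    reactorBtn.style.boxShadow = ""; // let the animation drive the glow
    pulse = reactorBtn.animate(
      [
        { boxShadow: "0 0 10px #f00" },
        { boxShadow: "0 0 40px #f00" },
        { boxShadow: "0 0 10px #f00" },
      ],
      { duration: 1200, iterations: Infinity } // red pulse while listening
    );
  } else {
    reactorBtn.style.boxShadow = "0 0 20px #0af"; // steady blue glow while idle
  }
}
```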
**Technical Implementation Details:**
- Generate a single, self-contained HTML file with `<style>` and `<script>` tags.
- Ensure the JavaScript is robust with error handling (e.g., microphone permissions denied, speech recognition not supported).
- Use `localStorage` or a simple in-memory object to store the contact name-to-number mapping for the WhatsApp feature. Pre-populate it with a few example contacts (e.g., "Mom", "Dad", "Office"); see the storage sketch after this list.
- The assistant should provide spoken feedback for most actions (e.g., "Opening YouTube", "Opening WhatsApp chat with Mom").
- The code must be well-commented to explain the logic.
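A possible shape for that storage, assuming a hypothetical `jarvisContacts` key in `localStorage` (the key name and numbers are placeholders, not real):

```javascript
// Contact-storage sketch. The "jarvisContacts" key and the numbers are
// placeholders; seed the examples once, then read the map back on load.
const DEFAULT_CONTACTS = {
  mom: "15551230001",
  dad: "15551230002",
  office: "15551230003",
};

if (!localStorage.getItem("jarvisContacts")) {
  localStorage.setItem("jarvisContacts", JSON.stringify(DEFAULT_CONTACTS));
}
const contacts = JSON.parse(localStorage.getItem("jarvisContacts"));

// Keys are lowercase so "Send message to Mom" (lowercased by the recognition
// handler) matches: contacts["mom"] -> "15551230001"
```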
**Goal:** Create a stunning, functional demo that feels like a real step toward a personal AI assistant, ready to be tested and iterated on.