Introduction

If you’re new to OWASP Juice Shop, try to follow the tasks in order. If you’re interested, follow our series for more guidance.
OWASP Juice Shop is an intentionally vulnerable web application project that can be used by beginners or cybersecurity aspirants to improve their skills in web application penetration testing. It is a great platform to practice on before stepping into the world of bug hunting, as it gives you an upper hand by providing prior knowledge and hands-on experience.
I am willing to explain each topic we learn through each challenge in depth. Yes, I am trying to explain each hacking concept and provide more details through the OWASP challenge. So, this is not just a normal solution to the OWASP room. If that’s what you want, then go ahead and search Google, there will be a lot of solutions to the challenge. But here, we learn hacking concepts too.
Concept and skill you must understand while solving this lab:
WebLLM based Prompt Injection.
WebLLM based Prompt Injection
There are a lot of web-based large language models available in the market, and like any tech product, they come with many vulnerabilities. One of the most dangerous among them is the prompt injection vulnerability. As of 2025, OWASP has listed it among the top 10 vulnerabilities in web-based large language models.
Consequences: What if an LLM gives you the recipe for making a banned drug or the method to develop a dangerous malware? If you bypass the normal filtering or restrictions of an LLM, you can achieve this. This is also called jailbreaking. Prompt injection is an exploit where attackers manipulate language models (LLMs) by crafting input prompts that appear legitimate but are designed to trigger unintended behavior or bypass safety measures. It’s a way of injecting malicious inputs and making the AI model respond inappropriately.
Sites: Gandalf is a simple and beginner-friendly platform to practice prompt injection techniques. It’s highly recommended for those who are new to WebLLM-based attacks, as it offers a highly simplified environment to understand how prompt injection works.
You can also explore some labs available on PortSwigger’s Web Security Academy. These labs provide hands-on experience and help you understand the impact of such vulnerabilities from a business or company’s website perspective. Using these resources gives you a clear idea of how dangerous these attacks can be in the real world.
To learn more about this attack, how it works, and its real-world consequences, check out our in-depth article on the topic – coming soon..
Setup:
Link: github
Step 1: Install node.js
Step 2: Run git clone https://github.com/juice-shop/juice-shop.git --depth 1
(or clone your own fork of the repository).
Step 3: Change directory to the cloned folder.
cd juice-shop
Step 4: Run npm install
(only has to be done before first start or when you change the source code)
Step 5: Run npm start
Step 6: Browse to http://localhost:3000
Task
The task is to make the chatbot give us a coupon code which we can use. This is a 1-star category task, which is very easy and can be solved in less than 3 minutes.
Solution
Step 1: Open the side menu of the Juice Shop site.

Step 2: From the menu, click on the Support Chat option.

Step 3: Prompt the bot to convince it that giving us a coupon won’t cause any problems. You can try your own way. anyway, I’ll attach a screenshot of my method below.


We got the coupon code.

Conclusion
We just completed the OWASP Juice Shop’s second simple task through a few steps, and by now, I think you have a clearer understanding of the concept of prompt injection in a simpler way. Do your own research, nothing is complete without your own effort. We will soon be sharing more solutions to the challenges in my own way of explaining.
Informative
I’ve already read the first part. it’s great! But we’d love to see more articles on these topics. Your website currently has only 7 articles on cybersecurity, which feels a bit limited. We’re definitely looking forward to more content, maybe even daily updates if possible.