Update README.md

Giuseppe Berardi · 2025-08-02 11:31:48 +00:00
parent cadf3c0af3 · commit 1d86eaa021

README.md
# WeGrow
> *Hackathon Project - [NOI Hackathon] [2025]*

## 🚀 Overview
Hi everyone! We are WeGrow, and we are working on a project involving a chatbot that predicts the growth and health of plants.

## 🎯 Problem Statement
The main challenge was to find the right LLM to work with in a short time: in particular, an LLM that turns text into images, combined with a weather API and GPS integration to shape the LLM's answer. We tested our solution on a basil plant, predicting its growth under different weather conditions. Growth can also be predicted indoors, not only outside; in that case the user sets the parameters (humidity, temperature, pressure and so on) through a GUI. If the user instead wants to predict the plant's growth outside, the LLM looks up the weather for the period the user chooses, via the Open-Meteo API (see the sketch below).
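As a rough illustration of the outdoor path, here is a minimal sketch of fetching an Open-Meteo forecast to feed into the prompt; the coordinates, the `requests` dependency, and the chosen daily variables are assumptions for the example, with the real coordinates coming from the GPS integration:

```python
# Hypothetical sketch: fetch a 7-day forecast to summarise into the LLM prompt.
import requests

params = {
    "latitude": 46.50,    # assumed example coordinates (Bolzano);
    "longitude": 11.35,   # the GPS integration would supply the real ones
    "daily": "temperature_2m_max,temperature_2m_min,precipitation_sum",
    "forecast_days": 7,
    "timezone": "auto",
}
resp = requests.get("https://api.open-meteo.com/v1/forecast", params=params)
resp.raise_for_status()
daily = resp.json()["daily"]
# Arrays with one value per day; these get summarised into the LLM's prompt.
print(daily["time"], daily["temperature_2m_max"])
```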
As a problem, we had trouble describing the final picture obtained through the various LLMs, using LLaVA as the last one. We also ran into a GPU exhaustion problem without fully understanding it.
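We never pinned the exhaustion down; for reference, here is a sketch of the standard memory-saving switches that diffusers pipelines expose. These are documented diffusers calls, but we are not claiming this is what the project actually ran:

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline

# Load weights in half precision to roughly halve VRAM usage.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
)
pipe.enable_attention_slicing()    # lower peak memory at some speed cost
pipe.enable_model_cpu_offload()    # keep submodules on CPU until needed (needs accelerate)
torch.cuda.empty_cache()           # drop cached allocations between generations
```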

## 💡 Solution
To get the answers right we had to read various articles on which open-source LLMs to use. We used:
- `StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix")`, which takes an image plus text as input and returns an image as output (see the sketch after this list). To set things up we installed Ollama for Windows from https://ollama.com/download.
- LLaVA, to generate a text description from the final image (unsuccessful). To install it we ran:
  ```bash
  git clone https://github.com/haotian-liu/LLaVA.git
  cd LLaVA
  pip install -e .
  ```
- We also trained a model by crawling Google images of basil and tomatoes, trying to optimize the output of the instruct-pix2pix pipeline; the training itself worked, but without many actual changes to the output.
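A minimal sketch of the image+text step with the instruct-pix2pix pipeline; the input file name, prompt, and generation parameters are made up for this example:

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

plant = Image.open("plant.jpg").convert("RGB")  # hypothetical input photo
result = pipe(
    "show this basil plant after two weeks of sunny weather",  # example prompt
    image=plant,
    num_inference_steps=20,
    image_guidance_scale=1.5,  # how strongly to stay close to the input image
).images[0]
result.save("plant_grown.png")
```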

## ✨ Features
- [ ] Interactive dashboard sliders
- [ ] Choice of environment conditions (open air vs. controlled, e.g. a laboratory)
- [ ] Improved data precision through the Open-Meteo API and GPS integration
- [ ] When the local environment is selected, the application can run completely locally, turning it into a lightweight and robust friend

## 🛠️ Tech Stack
**Frontend:**
- Python with tkinter

**Backend:**
- Python scripts

**Other Tools:**
- Open-Meteo API
- The LLMs used to predict growth, described above
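To give an idea of the frontend, here is a minimal tkinter sketch of environment-condition sliders; the labels and ranges are invented for the example and are not taken from the project's launcher.py:

```python
import tkinter as tk

root = tk.Tk()
root.title("WeGrow - environment conditions")

# One slider per controllable parameter (hypothetical ranges).
humidity = tk.Scale(root, from_=0, to=100, orient="horizontal", label="Humidity (%)")
temperature = tk.Scale(root, from_=-10, to=45, orient="horizontal", label="Temperature (°C)")
pressure = tk.Scale(root, from_=950, to=1050, orient="horizontal", label="Pressure (hPa)")
for slider in (humidity, temperature, pressure):
    slider.pack(fill="x", padx=10, pady=4)

root.mainloop()
```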

### Installation
1. Clone the repository
```bash
git clone https://github.com/your-username/NOIProject.git
cd NOIProject
```
2. Install dependencies
```bash
# Go into PlantDashboard and create your venv (e.g. PlantEnv)
cd PlantDashboard
python3 -m venv PlantEnv
```
3. Install the application
```bash
# Still inside PlantDashboard
pip install -r requirements.txt
```

## 🚀 Using the application
After following the installation steps, you can use our application by simply activating the venv with `source PlantEnv/bin/activate` and running `python launcher.py` from inside the PlantDashboard folder.
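Put together (the Windows activation path is the standard venv layout, included as an untested convenience):

```bash
cd PlantDashboard
source PlantEnv/bin/activate   # on Windows: PlantEnv\Scripts\activate
python launcher.py
```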
## 🧑‍💻 Team
Meet our amazing team of 4:
| Name | Role | GitHub | LinkedIn |
|------|------|---------|----------|
| Member 1 | Role | [@username](https://github.com/username) | [LinkedIn](https://linkedin.com/in/username) |
| Member 2 | Role | [@username](https://github.com/username) | [LinkedIn](https://linkedin.com/in/username) |
| Member 3 | Role | [@username](https://github.com/username) | [LinkedIn](https://linkedin.com/in/username) |
| Member 4 | Role | [@username](https://github.com/username) | [LinkedIn](https://linkedin.com/in/username) |
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---