# BoneMeal

## How to run

- Download Llama 3.1-8B-Instruct (see the download sketch below).
- Insert your Hugging Face token in `server.py`, line 9:
```python
from huggingface_hub import login  # login() is provided by the huggingface_hub package

# Insert your HF token here:
login("<HF-token-here>")
```
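
Llama 3.1-8B-Instruct is a gated model on the Hugging Face Hub, so you must accept its license on the model page before downloading it. Below is a minimal sketch of one way to fetch it with the Hugging Face CLI; the exact repository id and the loading code used by `server.py` may differ, so treat it only as a starting point.

```bash
# Sketch only: download the gated model via the Hugging Face CLI.
pip install -U "huggingface_hub[cli]"
huggingface-cli login    # paste your HF token when prompted
huggingface-cli download meta-llama/Llama-3.1-8B-Instruct
```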
## Environment Setup for PlantCV & Stable Diffusion
### Why two environments?
- **PlantCV 4.8** requires **NumPy ≥ 2.0**.
- **Stable Diffusion / Diffusers** (together with Torch, Accelerate, etc.), by contrast, requires **NumPy < 2.0**.
These requirements are **incompatible in the same Python environment**, so the safest way to work is to have **two separate virtual environments**.
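
If you are ever unsure which environment is active, a quick sanity check (not part of the project, just a convenience) is to print the installed NumPy version:

```bash
# Print the NumPy version of the currently active environment:
# expect >= 2.0 in the PlantCV environment and < 2.0 in the Stable Diffusion one.
python -c "import numpy; print(numpy.__version__)"
```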
---
## Environment 1 – PlantCV (image analysis)
This environment is used for everything that does not involve AI. It requires Anaconda3; the environment setup is defined in the corresponding YAML file (`env.yaml`).
### Create the environment
```bash
# Create the environment from the YAML definition and activate it
conda env create -f env.yaml
conda activate myEnv    # "myEnv" must match the "name:" field in env.yaml
```
The required modules can be found in `requirements_plantcv.txt`.
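
They can be installed inside the activated conda environment, for example with `pip` (assuming the file sits in the repository root):

```bash
# Install the PlantCV requirements inside the activated environment.
pip install -r requirements_plantcv.txt
```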
## 🖼 Environment 2 – Stable Diffusion (image generation)
This environment is used to generate realistic images (future projections of the plant) based on the JSON produced by PlantCV.
### Create the environment
```bash
python -m venv env_sd
env_sd\Scripts\activate # Windows
source env_sd/bin/activate # Linux/Mac
```
The required modules can be found in `requirements_stableDiff.txt`.
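
As with the PlantCV environment, they can be installed with `pip` inside the activated virtual environment (again assuming the file sits in the repository root):

```bash
# Install the Stable Diffusion requirements inside the activated venv.
pip install -r requirements_stableDiff.txt
```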