Course Outline
Introduction to Vision-Language Models
- Overview of VLMs and their role in multimodal AI
- Popular architectures: CLIP, Flamingo, BLIP, etc.
- Use cases: search, captioning, autonomous systems, content analysis
Preparing the Fine-Tuning Environment
- Setting up OpenCLIP and other VLM libraries (see the setup sketch after this list)
- Dataset formats for image-text pairs
- Preprocessing pipelines for vision and language inputs
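To illustrate the environment setup, here is a minimal sketch using open_clip and PyTorch; the model name, pretrained tag, image file, and caption are illustrative placeholders rather than course-specified values.

```python
# Minimal setup sketch, assuming the open_clip / PyTorch stack is installed;
# the image path and caption below are hypothetical placeholders.
import open_clip
import torch
from PIL import Image

# Load a pretrained CLIP backbone together with its matching image transforms.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

# Preprocess one image-text pair the way the model expects.
image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # (1, 3, 224, 224)
text = tokenizer(["a photo of a cat"])                      # (1, 77) token ids

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
```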
Fine-Tuning CLIP and Similar Models
- Contrastive loss and joint embedding spaces (see the loss sketch after this list)
- Hands-on: fine-tuning CLIP on custom datasets
- Handling domain-specific and multilingual data
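The symmetric contrastive objective used by CLIP can be sketched as follows, assuming a batch of paired, L2-normalized image and text embeddings; the temperature value is illustrative.

```python
# Sketch of the symmetric CLIP-style contrastive loss. Assumes image_features
# and text_features are L2-normalized (B, D) tensors where row i of each
# tensor describes the same image-text pair.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    # Cosine similarities between every image and every text in the batch.
    logits = image_features @ text_features.t() / temperature  # (B, B)
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matching pairs sit on the diagonal; penalize both retrieval directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```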
Advanced Fine-Tuning Techniques
- Using LoRA and adapter-based methods for efficiency (see the LoRA sketch after this list)
- Prompt tuning and visual prompt injection
- Zero-shot vs. fine-tuned evaluation trade-offs
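As an example of an adapter-based approach, the sketch below applies LoRA to the Hugging Face CLIP implementation via the peft library; the rank, alpha, and dropout values are assumptions about a typical configuration, not a prescription from the course.

```python
# LoRA sketch using Hugging Face transformers + peft, assuming both libraries
# are available; target_modules matches the attention projections in the HF
# CLIP layers, and the hyperparameters are illustrative.
from transformers import CLIPModel
from peft import LoraConfig, get_peft_model

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

lora_config = LoraConfig(
    r=8,                                  # low-rank update dimension
    lora_alpha=16,                        # scaling factor for the LoRA updates
    target_modules=["q_proj", "v_proj"],  # attention projections in both towers
    lora_dropout=0.05,
)

# Wrap the model so only the small LoRA matrices are trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```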
Evaluation and Benchmarking
- Metrics for VLMs: retrieval accuracy, BLEU, CIDEr, recall (see the recall@k sketch after this list)
- Visual-text alignment diagnostics
- Visualizing embedding spaces and misclassifications
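Retrieval-style evaluation can be sketched with a simple recall@k routine, assuming paired image and text embeddings where row i of each matrix belongs to the same pair.

```python
# Recall@k sketch for image-to-text retrieval. Assumes L2-normalized (N, D)
# embedding matrices whose rows are aligned image-text pairs.
import torch

def recall_at_k(image_features, text_features, k=5):
    sims = image_features @ text_features.t()           # (N, N) similarity matrix
    topk = sims.topk(k, dim=1).indices                  # top-k text indices per image
    targets = torch.arange(sims.size(0), device=sims.device).unsqueeze(1)
    hits = (topk == targets).any(dim=1).float()         # 1 if the true caption is in the top-k
    return hits.mean().item()
```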
Deployment and Use in Real Applications
- Exporting models for inference (TorchScript, ONNX) (see the export sketch after this list)
- Integrating VLMs into pipelines or APIs
- Resource considerations and model scaling
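One possible export path is sketched below using torch.onnx.export on the open_clip image tower; the wrapper class, output filename, and input size are illustrative assumptions.

```python
# ONNX export sketch for the CLIP image encoder, assuming the open_clip model
# from earlier; the filename and 224x224 input size are illustrative.
import torch
import open_clip

model, _, _ = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()

class ImageEncoder(torch.nn.Module):
    """Thin wrapper so the export traces only the image tower."""
    def __init__(self, clip_model):
        super().__init__()
        self.clip_model = clip_model

    def forward(self, pixels):
        return self.clip_model.encode_image(pixels)

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    ImageEncoder(model), dummy, "clip_image_encoder.onnx",
    input_names=["pixels"], output_names=["embedding"],
    dynamic_axes={"pixels": {0: "batch"}, "embedding": {0: "batch"}},
)
```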
Case Studies and Applied Scenarios
- Media analysis and content moderation
- Search and retrieval in e-commerce and digital libraries
- Multimodal interaction in robotics and autonomous systems
Summary and Next Steps
Requirements
- An understanding of deep learning as applied to computer vision and natural language processing
- Experience with PyTorch and transformer-based models
- Familiarity with multimodal model architectures
Audience
- Computer vision engineers
- AI developers
14 Hours