Discover Agent Skills for data science & ML. Browse 61 skills for Claude, ChatGPT & Codex.
Facilitates the use of local Ollama models with the official OpenAI Python library and compatible AI orchestration frameworks.
Enforces E8 architecture standards and QIG purity protocols within the Pantheon development ecosystem.
Analyzes and visualizes complex network structures and graph data within Python environments.
Processes and analyzes massive tabular datasets exceeding available RAM using out-of-core DataFrames and lazy evaluation.
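The core out-of-core idea can be sketched without any particular DataFrame library: stream the data in fixed-size chunks and keep only running aggregates in memory, never the full column. This is a toy stdlib illustration of the pattern that engines like lazy DataFrames implement with query planning; the function names are hypothetical.

```python
from itertools import islice

def read_in_chunks(rows, chunk_size):
    """Yield fixed-size chunks so only one chunk lives in memory at a time."""
    it = iter(rows)
    while chunk := list(islice(it, chunk_size)):
        yield chunk

def chunked_mean(rows, chunk_size=1000):
    """Streaming mean over a dataset larger than RAM: only the running
    total and count are retained, never the materialized column."""
    total, count = 0.0, 0
    for chunk in read_in_chunks(rows, chunk_size):
        total += sum(chunk)
        count += len(chunk)
    return total / count

# A generator stands in for a huge on-disk column
mean = chunked_mean((float(i) for i in range(1_000_000)), chunk_size=4096)
```

Real out-of-core engines add lazy evaluation on top of this: operations build a query plan and execution is deferred until a result is actually requested, so the planner can fuse and reorder chunked passes.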
Manages FiftyOne dataset visualization and curation environments using Podman Quadlet containers with integrated MongoDB sidecars.
Manages ComfyUI instances for node-based Stable Diffusion image generation with automated GPU configuration and model management.
Builds process-based discrete-event simulations in Python for modeling complex systems with shared resources.
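The essence of a discrete-event simulation with a shared resource can be sketched in pure Python: a future-event list ordered by time, where handling one event (an arrival) schedules another (a departure). This is an illustrative toy, not the skill's actual framework; process-based libraries such as SimPy wrap this loop in coroutines and resource objects.

```python
import heapq

def single_server_queue(arrivals, service_time):
    """Jobs contend for one shared server: arrivals that find the server
    busy wait, and each completion is scheduled as a future event."""
    events = [(t, "arrival", job) for job, t in enumerate(arrivals)]
    heapq.heapify(events)
    server_free_at = 0.0
    completions = []
    while events:
        time, kind, job = heapq.heappop(events)
        if kind == "arrival":
            start = max(time, server_free_at)        # wait if server is busy
            server_free_at = start + service_time    # hold the shared resource
            heapq.heappush(events, (server_free_at, "departure", job))
        else:
            completions.append((time, job))
    return completions

# Jobs arrive at t=0, 1, 2 and each needs 2 time units of service
done = single_server_queue([0.0, 1.0, 2.0], service_time=2.0)
```

Because service takes longer than the inter-arrival gap, later jobs queue up: job 0 finishes at t=2, job 1 at t=4, job 2 at t=6.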
Optimizes vector database performance by balancing search latency, recall accuracy, and memory footprint.
Guides the development of high-performance ML and AI applications in Rust using memory-efficient patterns and GPU acceleration.
Optimizes LLM fine-tuning via advanced QLoRA patterns, hyperparameter tuning, and memory-efficient implementation strategies.
Builds high-quality fine-tuning datasets from literary works to train AI models in specific authorial voices and writing styles.
Diagnoses and mitigates AI agent performance failures caused by long-context attention loss, context poisoning, and context clash.
Designs and implements sophisticated multi-agent systems using supervisor, swarm, and hierarchical patterns to solve complex context management challenges.
Provides a clean, Pythonic interface for local LLM inference, chat completions, and model management using the official Ollama library.
Provides foundational expertise in context engineering to optimize AI agent performance and manage token usage effectively.
Implements Group Relative Policy Optimization for efficient LLM alignment and reinforcement learning from human feedback.
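The distinguishing step in GRPO is computing advantages relative to a group of sampled completions for the same prompt, rather than against a learned value function. A minimal sketch of that normalization (function name is illustrative, not from the skill):

```python
import statistics

def group_relative_advantages(rewards, eps=1e-6):
    """Normalize each completion's reward against its group's mean and std.

    Instead of a learned value baseline, GRPO scores each sampled
    completion relative to the other completions drawn for the same
    prompt, which removes the need for a separate critic network.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# One prompt, four sampled completions scored by a reward model
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

These group-normalized advantages then weight a clipped policy-gradient update, PPO-style, over the tokens of each completion.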
Streamlines parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and Unsloth to optimize memory and training speed.
Optimizes Large Language Models using Direct Preference Optimization to align behavior with preferred response pairs without explicit reward modeling.
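DPO's key move is treating the log-probability margin over a frozen reference model as an implicit reward, so no separate reward model is trained. A minimal numeric sketch of the per-pair loss (variable names are illustrative):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    The policy's implicit reward for a response is its log-prob gain
    over the frozen reference model; the loss pushes the chosen
    response's gain above the rejected one's via a logistic objective.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# Chosen response gained log-prob vs. the reference, rejected lost some
loss = dpo_loss(-10.0, -12.0, -10.5, -11.5, beta=0.1)
```

When the policy matches the reference exactly, the margin is zero and the loss sits at log 2; it falls as the policy's preference for the chosen response strengthens.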
Fine-tunes vision-language models like Pixtral and Ministral using Unsloth's FastVisionModel optimizations for faster training.
Transforms external RDF context into formal Belief-Desire-Intention (BDI) models to enable rational agency and explainable reasoning in AI agents.
Optimizes large language models for efficient inference and training by reducing memory footprint using advanced precision-shifting techniques like 4-bit and 8-bit quantization.
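The basic mechanics of 8-bit quantization can be shown with symmetric absmax scaling: one scale factor maps floats into the signed integer range, and dequantization multiplies back. This is a toy per-tensor sketch; production 4-bit/8-bit schemes add per-block scales, outlier handling, and formats like NF4.

```python
def absmax_quantize(values, bits=8):
    """Symmetric absmax quantization to signed integers.

    Maps floats into [-qmax, qmax] (127 for 8-bit, 7 for 4-bit) using a
    single scale; dequantization multiplies each integer by that scale,
    so the worst-case rounding error is half a quantization step.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in values) / qmax
    quantized = [round(v / scale) for v in values]
    dequantized = [q * scale for q in quantized]
    return quantized, dequantized, scale

weights = [0.9, -1.27, 0.03, 0.64]
q8, dq8, scale = absmax_quantize(weights, bits=8)
```

Storing the int8 codes plus one float scale cuts memory roughly 4x versus float32, which is the footprint reduction these techniques trade against a small precision loss.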
Architects and optimizes LLM-powered applications using structured methodologies, pipeline design, and agent-assisted development patterns.
Builds sophisticated LLM applications using LangChain for prompt management, model chaining, and structured output parsing.
Optimizes AI agent context through compression, masking, and strategic partitioning to maximize token efficiency and model performance.
Streamlines the development and training of reward models for RLHF pipelines and thinking quality scoring.
Evaluates LLM output quality and optimizes prompt templates using Evidently.ai metrics and LLM-as-a-Judge patterns.
Accelerates machine learning inference using Unsloth and vLLM backends for 2x faster token generation.
Provides technical blueprints and implementation patterns for the Transformer architecture to guide LLM development and fine-tuning.
Imports GGUF models from HuggingFace directly into Ollama for local inference and model management.
Fine-tunes large language models using PyTorch, HuggingFace, and Unsloth to adapt AI behaviors to specific datasets and tasks.