Role Overview
Reporting to the IT Manager, the AI Orchestration Engineer operates at the intersection of AI innovation and institutional operations. The role is responsible for architecting and implementing orchestration pipelines that connect multiple AI models, internal APIs, and University databases, and for standardizing how AI agents interact with tools, APIs, and data sources using Model Context Protocol (MCP) standards across the University's technology ecosystem.
Duties and Responsibilities
- Pipeline Architecture: Architect and implement orchestration pipelines that connect multiple AI models, internal APIs, and University databases while integrating MCP standards.
- Multimodal Systems: Design and implement multimodal AI pipelines capable of processing text, images, audio, and documents to support diverse University use cases.
- Contextual Reasoning: Develop and maintain a knowledge graph layer integrated with Retrieval-Augmented Generation (RAG) pipelines to enhance contextual reasoning over interconnected University data.
- Inference Management: Deploy and manage on-premise or edge inference solutions for data-sensitive workflows where University data must remain within institutional infrastructure.
- AI Security: Implement AI safety, security, and guardrail frameworks to enforce output validation, prevent prompt-injection attacks, and ensure AI responses meet institutional and ethical standards.
- Multi-Agent Systems: Design multi-agent systems in which specialized AI agents handle distinct tasks.
- Prompt Engineering: Develop sophisticated prompt strategies and chain-of-thought routines to ensure AI agents perform complex reasoning tasks reliably.
- Workflow Automation: Partner with University administrators and faculty to identify high-friction manual workflows and automate them through AI.
- Other Duties: Perform any other duties as assigned by the supervisor or captured in the detailed job description.
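To illustrate the kind of work the RAG and guardrail duties above describe, the following is a minimal, hypothetical sketch of a retrieval step paired with an output-validation guardrail. All names are illustrative; a production pipeline would use a vector store, an actual LLM client, and institution-specific validation rules rather than the toy keyword matching and phrase blocklist shown here.

```python
# Illustrative sketch only: a toy retrieval-augmented generation (RAG) step
# with a simple guardrail. Names are hypothetical; the LLM call is omitted.
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query (toy stand-in
    for vector similarity search)."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def guard_output(text: str,
                 blocked=("ignore previous instructions",)) -> str:
    """Reject content containing known prompt-injection markers."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in blocked):
        raise ValueError("guardrail: suspicious content detected")
    return text


def answer_query(query: str, corpus: list[Document]) -> str:
    """Assemble a grounded prompt from retrieved context, then screen it.
    In a real pipeline, the prompt would be sent to an LLM endpoint and the
    model's response would be validated the same way."""
    context = "\n".join(d.text for d in retrieve(query, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return guard_output(prompt)
```

The design point the sketch makes is that validation sits on the pipeline's boundary: both retrieved context and (in a real deployment) model output pass through the same guard before anything reaches a downstream system.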
Qualifications and Experience
- A Bachelor's degree, at minimum, in Computer Science, Artificial Intelligence, Data Engineering, or a related field.
- 3+ years of experience in AI/ML engineering, data engineering, or software development with a focus on AI.
- Proven experience building and deploying LLM-powered applications in production environments.
- A strong portfolio or GitHub repository demonstrating work with RAG pipelines, agentic systems, or complex AI orchestrations.
- Experience working in an Agile/Scrum environment is mandatory.