About GlobalLogic - A Hitachi Group Company
GlobalLogic is a digital engineering leader, helping brands around the world design and build innovative products, platforms, and digital experiences for the modern world. We bring together experience design, engineering, and data to empower our customers with the digital business potential of the future. Headquartered in Silicon Valley, GlobalLogic operates design studios and engineering centers around the world, providing expert services to clients in the automotive, communications, financial services, healthcare and life sciences, manufacturing, media and entertainment, semiconductor, and technology industries. GlobalLogic is a Hitachi, Ltd. (TSE: 6501) group company and contributes to driving innovation through data and technology to improve quality of life and build a sustainable society.
What GlobalLogic offers to its employees:
Attractive projects: We focus on industries such as high tech, communications, media, healthcare, and retail, and many great global brands love what we build.
Collaborative environment: You can hone your skills by working with diverse and talented teams in an open and comfortable environment. There are also many opportunities to work with overseas teams and customers.
Work-life balance: GlobalLogic values work-life balance and offers flexible work hours, work-from-home opportunities, paid vacation and holidays, and excellent benefits.
Career development: We have a dedicated Learning & Development team that provides training.
Job Description
10–15 years of experience in IT, with at least 3 years in AI/ML or Generative AI solution architecture, leading enterprise-scale deployments.
The Generative AI Architect will be responsible for designing, implementing, and optimizing enterprise-grade AI solutions using advanced Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) frameworks. This role involves building scalable, secure, and compliant GenAI architectures across multi-cloud environments, integrating with existing enterprise ecosystems, and ensuring model performance, observability, and reliability. The ideal candidate combines deep technical expertise with strategic thinking to guide teams in developing innovative, production-ready AI systems.
Key Skills – Must Have:
- Programming: Python, JavaScript, Bash
- AI/ML Frameworks: LangChain, Hugging Face, LlamaIndex, TensorFlow, PyTorch
- RAG & LLM Expertise: Prompt Engineering, Context Retrieval, Vector Databases
- MLOps: Model orchestration, CI/CD integration, automated retraining pipelines
- Cloud Platforms: AWS, Azure, GCP (AI/ML Services & Cloud-native Design)
- Observability & Monitoring: OpenTelemetry, ELK, Prometheus, Grafana
- Security: Keycloak, OAuth, SAML, AI governance & responsible AI frameworks
- Architecture Patterns: Microservices, Event-Driven, API Management
- Development Frameworks: FastAPI, Flask, Django for AI service integration
- Versioning & Collaboration: GitLab CI/CD, JIRA, Confluence
Key Skills – Nice to Have:
- Experience with vector search and embeddings (FAISS, Pinecone, Weaviate, Chroma).
- Knowledge of multi-agent frameworks and AI workflow orchestration (e.g., LangGraph, CrewAI).
- Familiarity with PromptOps, GenAI observability tools, and bias mitigation.
- Background in Data Science, NLP, or Semantic Search.
- Exposure to microfrontend architectures and full-stack application design.
- Hands-on experience with container orchestration (Kubernetes, Docker).
- Familiarity with GitHub Copilot or AI-assisted coding.
- Strong technical documentation and solution storytelling abilities.
- Experience with agentic frameworks (Dify, LangGraph, Copilot Studio, OpenAI SDK).
Job Responsibilities
- Architect and implement Generative AI and RAG-based solutions using frameworks like LangChain, LlamaIndex, and Hugging Face.
- Design LLM-powered systems for knowledge retrieval, automation, and conversational AI use cases.
- Integrate AI pipelines with existing enterprise applications, APIs, and microservices.
- Establish prompt engineering, evaluation, and monitoring frameworks to track model performance, drift, toxicity, and data leakage.
- Implement MLOps best practices for AI lifecycle management — training, deployment, monitoring, and retraining.
- Define and enforce security, compliance, and governance practices for AI data and model use.
- Collaborate with cross-functional teams (Data Science, DevOps, Security, Product) to deliver scalable AI products.
- Lead AI architecture reviews and define standards for performance, observability, and cost optimization.
- Drive multi-cloud and hybrid deployment strategies using AWS, Azure, and GCP for AI workloads.
- Provide technical leadership and mentorship to engineering teams, promoting adoption of AI and automation technologies.