Machine Learning Operations (MLOps) Engineer (GCP)
As a full-spectrum cloud integrator, we help hundreds of companies realize the value, efficiency, and productivity of the cloud. We take customers on their journey to enable, operate, and innovate using cloud technologies, from migration strategy to operational excellence and immersive transformation.
If you like a challenge, you’ll love it here, because we solve complex business problems every day, building and promoting great technology solutions that impact our customers’ success. The best part is that we’re committed to you and your growth, both professionally and personally.
We are looking for a seasoned Machine Learning Operations (MLOps) Engineer to build and optimize machine learning platforms. This role requires deep expertise in machine learning engineering and infrastructure, with a strong focus on developing scalable inference systems. Proven experience building and deploying ML platforms in production environments is essential. This remote position also requires excellent communication skills and the ability to independently tackle complex challenges with innovative solutions.
If you get a thrill working with cutting-edge technology and love to help solve customers’ problems, we’d love to hear from you. It’s time to rethink the possible. Are you ready?
What you will be doing:
Develop CI/CD workflows for ML models and data pipelines using tools like Cloud Build, GitHub Actions, or Jenkins.
Automate model training, validation, and deployment across development, staging, and production environments.
Monitor and maintain ML models in production using Vertex AI Model Monitoring, logging (Cloud Logging), and performance metrics.
Ensure reproducibility and traceability of experiments using ML metadata tracking tools like Vertex AI Experiments or MLflow.
Manage model versioning and rollbacks using Vertex AI Model Registry or custom model management solutions.
Collaborate with data scientists and software engineers to translate model requirements into robust and scalable ML systems.
Optimize model inference infrastructure for latency, throughput, and cost efficiency using GCP services such as Cloud Run, Kubernetes Engine (GKE), or custom serving frameworks.
Implement data and model governance policies, including auditability, security, and access control using IAM and Cloud DLP.
Stay current with evolving GCP MLOps practices, tools, and frameworks to continuously improve system reliability and automation.
Qualifications and skills: