4.0 - 7.0 years
1 - 2 Lacs
Chennai
Work from Office
This is an urgent, fast-filling position. We need immediate joiners or candidates with less than one month's notice period.

AI/ML Engineer
Location: Chennai

Job Summary:
We are looking for an AI/ML Engineer to develop, optimize, and deploy machine learning models for real-world applications. You will work on end-to-end ML pipelines, collaborate with cross-functional teams, and apply AI techniques such as NLP, computer vision, and time-series forecasting. This role offers opportunities to work on cutting-edge AI solutions while growing your expertise in model deployment and optimization.

Key Responsibilities:
- Design, build, and optimize machine learning models for various business applications.
- Develop and maintain ML pipelines, including data preprocessing, feature engineering, and model training.
- Work with TensorFlow, PyTorch, Scikit-learn, and Keras for model development.
- Deploy ML models in cloud environments (AWS, Azure, GCP) and work with Docker/Kubernetes for containerization.
- Perform model evaluation, hyperparameter tuning, and performance optimization.
- Collaborate with data scientists, engineers, and product teams to deliver AI-driven solutions.
- Stay up to date with the latest advancements in AI/ML and implement best practices.
- Write clean, scalable, and well-documented code in Python or R.

Technical Skills:
- Programming Languages: Proficiency in languages like Python, which is particularly popular for developing ML models and AI algorithms due to its simplicity and extensive libraries such as NumPy, Pandas, and Scikit-learn.
- Machine Learning Algorithms: Deep understanding of supervised learning (linear regression, decision trees, SVM), unsupervised learning, and reinforcement learning.
- Data Management and Analysis: Skills in data cleaning, feature engineering, and data transformation.
- Deep Learning: Familiarity with neural networks, CNNs, RNNs, and other architectures.
- Machine Learning Frameworks and Libraries: Experience with TensorFlow, PyTorch, Keras, or Scikit-learn.
- Natural Language Processing (NLP): Familiarity with techniques like word2vec, sentiment analysis, and summarization.
- Cloud Computing: Experience with cloud-based services like AWS SageMaker, Google Cloud AI Platform, or Microsoft Azure Machine Learning.
- Data Preprocessing: Skills in handling missing data, data normalization, feature scaling, and data transformation.
- Feature Engineering: Ability to create new features from existing data to improve model performance.
- Data Visualization: Familiarity with visualization tools like Matplotlib, Seaborn, Plotly, or Tableau.
- Containerization: Knowledge of containerization tools like Docker and Kubernetes.
- Databases: Understanding of relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB).
- Data Warehousing: Familiarity with data warehousing concepts and tools like Amazon Redshift or Google BigQuery.
- Computer Vision: Understanding of computer vision concepts and techniques like object detection, segmentation, and image classification.
- Reinforcement Learning: Knowledge of reinforcement learning concepts and techniques like Q-learning and policy gradients.
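The data normalization and feature scaling skills this posting asks for can be pictured in a few lines of plain Python. The min-max scaler below is a minimal, library-free sketch for illustration only; the function name and sample values are invented, not taken from any employer's codebase.

```python
def min_max_scale(values):
    """Rescale a list of numbers to the [0, 1] range (min-max normalization).

    A common preprocessing step before training distance- or
    gradient-based models; a constant feature maps to all zeros.
    """
    lo, hi = min(values), max(values)
    if hi == lo:  # constant feature: avoid division by zero
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

ages = [20, 30, 40, 60]
print(min_max_scale(ages))  # → [0.0, 0.25, 0.5, 1.0]
```

In practice this is what Scikit-learn's `MinMaxScaler` does column-wise over a feature matrix.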
Posted 1 week ago
8.0 - 12.0 years
8 - 12 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Summary

What will you enjoy in this role?
This role focuses on designing, developing, and supporting all our online data solutions. You will work closely with business managers to design and build innovative solutions.

What you'll do
We seek software engineers with experience building and scaling services in on-premises and cloud environments. As a Lead Engineer in the Epsilon Attribution/Forecasting Product Development team, you will play a key role in implementing and optimizing advanced data processing solutions using Scala, Spark, and Hadoop. You will collaborate with cross-functional teams to deploy scalable big data solutions on our on-premises and cloud infrastructure. Your responsibilities will include building, scheduling, and maintaining complex workflows, as well as performing data integration and transformation tasks. You will troubleshoot issues, document processes, and communicate technical concepts clearly to both technical and non-technical stakeholders. Additionally, you will focus on continuously enhancing our attribution and forecasting engines, ensuring they effectively meet evolving business needs. Strong written and verbal communication skills (in English) are required to facilitate work across multiple countries and time zones. A good understanding of Agile methodologies (Scrum) is expected.

Qualifications
- Over 8 years of strong experience in Scala programming and extensive use of Apache Spark for developing and maintaining scalable big data solutions in both on-premises and cloud environments, particularly AWS and GCP.
- Proficient in performance tuning of Spark jobs: optimizing resource usage, shuffling, partitioning, and caching for maximum efficiency.
- Skilled in implementing scalable, fault-tolerant data pipelines with comprehensive monitoring and alerting.
- Hands-on experience with Python for developing infrastructure modules.
- Deep understanding of the Hadoop ecosystem, including HDFS, YARN, and MapReduce.
- Proficient in writing efficient SQL queries for handling large volumes of data in various database systems.
- Experienced in building, scheduling, and maintaining DAG workflows.
- Familiar with data warehousing concepts and technologies.
- Capable of taking end-to-end ownership in defining, developing, and documenting software objectives and requirements in collaboration with stakeholders.
- Experienced with Git or equivalent source control systems.
- Proficient in developing and implementing unit test cases to ensure code quality and reliability, and experienced in utilizing integration testing frameworks to validate system interactions.
- Effective collaborator with stakeholders and teams to understand requirements and develop solutions.
- Able to work within tight deadlines, prioritize tasks effectively, and perform under pressure.
- Experience mentoring junior staff.

Advantageous to have experience with:
- Databricks for unified data analytics, including Databricks Notebooks, Delta Lake, and catalogs.
- The ELK (Elasticsearch, Logstash, Kibana) stack for real-time search, log analysis, and visualization.
- A strong background in analytics, including the ability to derive actionable insights from large datasets and support data-driven decision-making.
- Data visualization tools like Tableau, Power BI, or Grafana.
- Docker for containerization and Kubernetes for orchestration.
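The "building, scheduling, and maintaining DAG workflows" requirement refers to dependency-ordered task execution, the model behind schedulers like Airflow. A minimal, framework-free sketch of resolving a task DAG into a run order with a topological sort (the stage names are invented for illustration):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on
# (hypothetical pipeline stages, not from any real system).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
    "alert": {"load"},
}

# static_order() yields tasks so that every dependency
# appears before the tasks that depend on it.
run_order = list(TopologicalSorter(dag).static_order())
print(run_order)
```

A real scheduler adds retries, backfills, and parallel execution of independent branches (here, `report` and `alert` could run concurrently once `load` finishes), but the ordering core is the same.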
Posted 2 weeks ago
3.0 - 4.0 years
22 - 25 Lacs
Bengaluru
Work from Office
Key Responsibilities

AI Model Deployment & Integration:
- Deploy and manage AI/ML models, including traditional machine learning and GenAI solutions (e.g., LLMs, RAG systems).
- Implement automated CI/CD pipelines for seamless deployment and scaling of AI models.
- Ensure efficient model integration into existing enterprise applications and workflows in collaboration with AI engineers.
- Optimize AI infrastructure for performance and cost efficiency in cloud environments (AWS, Azure, GCP).

Monitoring & Performance Management:
- Develop and implement monitoring solutions to track model performance, latency, drift, and cost metrics.
- Set up alerts and automated workflows to manage performance degradation and retraining triggers.
- Ensure responsible AI by monitoring for issues such as bias, hallucinations, and security vulnerabilities in GenAI outputs.
- Collaborate with data scientists to establish feedback loops for continuous model improvement.

Automation & MLOps Best Practices:
- Establish scalable MLOps practices to support the continuous deployment and maintenance of AI models.
- Automate model retraining, versioning, and rollback strategies to ensure reliability and compliance.
- Utilize infrastructure-as-code (Terraform, CloudFormation) to manage AI pipelines.

Security & Compliance:
- Implement security measures to prevent prompt injections, data leakage, and unauthorized model access.
- Work closely with compliance teams to ensure AI solutions adhere to privacy and regulatory standards (HIPAA, GDPR).
- Regularly audit AI pipelines for ethical AI practices and data governance.

Collaboration & Process Improvement:
- Work closely with AI engineers, product managers, and IT teams to align AI operational processes with business needs.
- Contribute to the development of AI Ops documentation, playbooks, and best practices.
- Continuously evaluate emerging GenAI operational tools and processes to drive innovation.

Qualifications & Skills

Education:
- Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related field.
- Relevant certifications in cloud platforms (AWS, Azure, GCP) or MLOps frameworks are a plus.

Experience:
- 3+ years of experience in AI/ML operations, MLOps, or DevOps for AI-driven solutions.
- Hands-on experience deploying and managing AI models, including LLMs and GenAI solutions, in production environments.
- Experience working with cloud AI platforms such as Azure AI, AWS SageMaker, or Google Vertex AI.

Technical Skills:
- Proficiency in MLOps tools and frameworks such as MLflow, Kubeflow, or Airflow.
- Hands-on experience with monitoring tools (Prometheus, Grafana, ELK stack) for AI performance tracking.
- Experience with containerization and orchestration tools (Docker, Kubernetes) to support AI workloads.
- Familiarity with automation scripting using Python, Bash, or PowerShell.
- Understanding of GenAI-specific operational challenges such as response monitoring, token management, and prompt optimization.
- Knowledge of CI/CD pipelines (Jenkins, GitHub Actions) for AI model deployment.
- Strong understanding of AI security principles, including data privacy and governance considerations.
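The model "versioning and rollback" responsibility in this posting is easy to picture as a tiny in-memory registry. The sketch below is an illustrative toy with invented names, not a real MLOps tool; in practice a system like MLflow's model registry provides this, with persistence and audit trails on top.

```python
class ModelRegistry:
    """Toy model registry: register artifact versions, promote one to
    production, and roll back to the previous production version."""

    def __init__(self):
        self.versions = {}      # version number -> model artifact
        self.history = []       # production versions, oldest first
        self.next_version = 1

    def register(self, artifact):
        v = self.next_version
        self.versions[v] = artifact
        self.next_version += 1
        return v

    def promote(self, version):
        if version not in self.versions:
            raise KeyError(f"unknown version {version}")
        self.history.append(version)

    def rollback(self):
        if len(self.history) < 2:
            raise RuntimeError("no earlier production version to roll back to")
        self.history.pop()      # drop the misbehaving current version
        return self.history[-1] # previous version becomes current again

registry = ModelRegistry()
v1 = registry.register("model-v1.bin")
v2 = registry.register("model-v2.bin")
registry.promote(v1)
registry.promote(v2)            # v2 starts drifting in production...
print(registry.rollback())      # → 1 (back to the known-good version)
```

A monitoring alert on drift or latency (the "retraining triggers" mentioned above) would be the thing that calls `rollback()` automatically.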
Posted 2 weeks ago
8.0 - 10.0 years
30 - 35 Lacs
Pune
Work from Office
Are you an expert at optimizing Cassandra for speed, reliability, and resilience? Do you excel at designing and scaling high-performance distributed databases? We're seeking a seasoned architect to play a key role in designing and maintaining robust database environments and leading integration initiatives across platforms. In this role, you'll be at the forefront of building high-performing, scalable, and secure systems that support seamless data exchange, leveraging APIs, event-driven flows, and real-time messaging. Your contributions will power automation, performance, and data-driven decision-making across the enterprise.

What You'll Do

Database Architecture & Operations:
- Architect, deploy, and monitor distributed database clusters (MySQL, Cassandra, MongoDB, PostgreSQL), ensuring high availability and performance.
- Perform advanced tuning, query optimization, and proactive issue resolution.
- Design and implement strategies for database backup, replication, disaster recovery, and data lifecycle management.
- Support MySQL systems for operational scalability and enterprise-level performance.

Enterprise Integration & EDI Framework:
- Lead and support a robust EDI/EI framework to integrate backend systems through real-time APIs and asynchronous data pipelines.
- Standardize data contracts, routing logic, transformation processes, and messaging patterns across systems and partners.
- Replace legacy batch processing with scalable, event-based integrations using messaging frameworks (e.g., Kafka).

API & Event-Driven Architecture:
- Design and optimize RESTful APIs and event-based services to enable real-time data exchange across internal and third-party systems.
- Build and manage ETL data flows into microservices or message queues, abstracting from underlying source technologies.

Data Pipeline Optimization & Automation:
- Identify bottlenecks in data pipelines and drive performance improvements through modeling and intelligent automation.
- Use AI/ML where applicable to structure unstructured data and enhance automation in transformation, reporting, and alerting workflows.

Cloud & DevOps Enablement:
- Architect and manage cloud-native database deployments on AWS, Azure, or GCP using automation tools (Terraform, Ansible).
- Collaborate with DevOps teams to integrate database schema/version control and data integration into CI/CD pipelines.
- Ensure secure, compliant, and cost-optimized cloud architecture.

Enterprise Programs & Collaboration:
- Contribute to strategic initiatives such as the NexGen Program, ensuring that database and integration designs align with performance, scalability, and long-term architectural goals.
- Work closely with application architects, developers, and QA teams to co-design and review data-driven solutions.

What You'll Need
- Bachelor's degree in Computer Science or a related field.
- 8–10 years of experience in database architecture, administration, and integrations.
- Proven hands-on experience with Cassandra, MongoDB, MySQL, and PostgreSQL.
- Certifications in database management or NoSQL databases are preferred.
- Expertise in database setup, performance tuning, and issue resolution.
- Strong knowledge of SQL/CQL, data modeling, and distributed systems architecture.
- Hands-on experience with multi-data-center deployments and monitoring tools such as New Relic.
- Experience in scripting and automation (Python, Java, Bash, Ansible).
- Experience managing Cassandra in cloud environments (AWS, GCP, or Azure).
- Familiarity with DevOps practices, containerization tools (Docker/Kubernetes), and data streaming platforms (Kafka, Spark).
- Experience standardizing enterprise integration approaches across business units.
- Solid understanding of data privacy, security, and regulatory compliance.
- Fluency in English (written and spoken) is required, as it is the corporate language across our global teams.

Here's What We Offer
At Scan-IT, we pride ourselves on our vibrant and supportive culture. Join our dynamic, international team and take on meaningful responsibilities from day one.
- Innovative Environment: Explore new technologies in the transportation and logistics industry.
- Collaborative Culture: Work with some of the industry's best in an open and creative environment.
- Professional Growth: Benefit from continuous learning, mentorship, and career advancement.
- Impactful Work: Enhance efficiency and drive global success.
- Inclusive Workplace: Thrive in a diverse and supportive environment.
- Competitive Compensation: Receive a salary that reflects your expertise.
- Growth Opportunities: Achieve your full potential with ample professional and personal development opportunities.

Join Scan-IT and be part of a team that's shaping the future of the transportation and logistics industry. Visit www.scan-it.com.sg and follow us on LinkedIn, Facebook and X.
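The Cassandra data-modeling and distributed-systems knowledge this role asks for rests on one core idea: rows are distributed by hashing the partition key onto a token ring. A deliberately simplified sketch of that idea (node names invented; real Cassandra uses Murmur3 tokens, virtual nodes, and replication):

```python
import hashlib
from bisect import bisect_right

class TokenRing:
    """Simplified Cassandra-style token ring: each node sits at a token
    position; a row's partition key hashes to a token, and the first
    node clockwise from that token owns the row."""

    def __init__(self, nodes):
        # Place each node on the ring at the hash of its name.
        self.ring = sorted((self._token(n), n) for n in nodes)

    @staticmethod
    def _token(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, partition_key):
        tokens = [t for t, _ in self.ring]
        # Wrap around to the first node when we fall past the last token.
        i = bisect_right(tokens, self._token(partition_key)) % len(self.ring)
        return self.ring[i][1]

ring = TokenRing(["node-a", "node-b", "node-c"])
print(ring.owner("customer:42"))  # the same key always routes to the same node
```

This is why choosing a partition key is the central Cassandra data-modeling decision: it fixes which node answers for a row, and skewed keys mean hot nodes.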
Posted 2 weeks ago
7 - 12 years
1 - 2 Lacs
Chennai
Work from Office
This is an urgent, fast-filling position. We need immediate joiners or candidates with less than one month's notice period.

Senior AI/ML Engineer
Location: Chennai
Experience: 7+ years

Job Summary:
We are looking for a Senior AI/ML Engineer to develop, optimize, and deploy machine learning models for real-world applications. You will work on end-to-end ML pipelines, collaborate with cross-functional teams, and apply AI techniques such as NLP, computer vision, and time-series forecasting. This role offers opportunities to work on cutting-edge AI solutions while growing your expertise in model deployment and optimization.

Role & Responsibilities:
- Design, build, and optimize machine learning models for various business applications.
- Develop and maintain ML pipelines, including data preprocessing, feature engineering, and model training.
- Work with TensorFlow, PyTorch, Scikit-learn, and Keras for model development.
- Deploy ML models in cloud environments (AWS, Azure, GCP) and work with Docker/Kubernetes for containerization.
- Perform model evaluation, hyperparameter tuning, and performance optimization.
- Collaborate with data scientists, engineers, and product teams to deliver AI-driven solutions.
- Stay up to date with the latest advancements in AI/ML and implement best practices.
- Write clean, scalable, and well-documented code in Python or R.
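The hyperparameter tuning duty listed above boils down to searching a parameter grid and keeping the best-scoring setting, which is what Scikit-learn's `GridSearchCV` automates. A minimal, library-free sketch; the scoring function and parameter names here are invented stand-ins for a real train-and-validate step:

```python
from itertools import product

def validation_score(learning_rate, depth):
    """Hypothetical validation score for a (learning_rate, depth) setting;
    a real run would train a model and evaluate it on held-out data.
    This toy peaks at learning_rate=0.1, depth=4."""
    return 1.0 - abs(learning_rate - 0.1) - abs(depth - 4) * 0.05

grid = {
    "learning_rate": [0.01, 0.1, 1.0],
    "depth": [2, 4, 8],
}

best_params, best_score = None, float("-inf")
for lr, d in product(grid["learning_rate"], grid["depth"]):
    score = validation_score(lr, d)
    if score > best_score:
        best_params, best_score = {"learning_rate": lr, "depth": d}, score

print(best_params)  # → {'learning_rate': 0.1, 'depth': 4}
```

Grid search is exhaustive and grows multiplicatively with each added parameter, which is why random or Bayesian search is often preferred once the grid gets large.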
Posted 1 month ago