0.0 - 3.0 years
2 - 6 Lacs
Mumbai
Work from Office
Data Validation (DV) Specialist (Using SPSS) - Analyst

Job Description:

Core Responsibilities:
- Perform data quality checks and validation on market research datasets.
- Develop and execute scripts and automated processes to identify data anomalies.
- Collaborate with the Survey Programming team to review survey questionnaires and recommend efficient programming and an optimal layout that enhances user experience.
- Investigate and document data discrepancies, working with the survey programming team and data collection vendors as needed.
- Create and maintain detailed data documentation and validation reports.
- Collaborate with Survey Programmers and internal project managers to understand data processing requirements and provide guidance on quality assurance best practices.
- Provide constructive feedback and suggestions for improving data quality, aiming to enhance overall survey quality.
- Automate data validation processes where possible to improve efficiency and reduce time spent on repetitive validation tasks.
- Maintain thorough documentation of findings and recommendations to ensure transparency and consistency in quality practices.
- Actively participate in team meetings to discuss project developments, quality issues, and improvement strategies, fostering a culture of continuous improvement.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Statistics, or a related field.
- At least 2 years of experience in data validation processes.
- Familiarity with data validation using SPSS, Dimensions, Quantum, or similar platforms.
- A proactive team player who thrives in a fast-paced environment and enjoys repetitive tasks that contribute to project excellence.
- Programming knowledge in a major language such as R, JavaScript, or Python, with an interest in building automation scripts for data validation.
- Excellent problem-solving skills and a willingness to learn innovative quality assurance methodologies.
- A desire for continuous improvement in processes, focusing on efficiencies that lead to scalable, high-quality data processing outcomes.
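As a flavor of the scripted anomaly checks this role involves, here is a minimal sketch in Python/pandas; the column names (respondent_id, q1-q5) and the rules are hypothetical placeholders, not a prescribed workflow:

```python
# Minimal sketch of automated survey-data validation with pandas.
# Column names and rules are hypothetical placeholders.
import pandas as pd

def validate_survey(df: pd.DataFrame) -> pd.DataFrame:
    """Flag common survey-data anomalies and return a report of issues."""
    issues = []

    # Duplicate respondents
    dupes = df[df.duplicated("respondent_id", keep=False)]
    issues += [(rid, "duplicate respondent") for rid in dupes["respondent_id"]]

    # Out-of-range values on an assumed 1-5 Likert scale
    likert_cols = ["q1", "q2", "q3", "q4", "q5"]
    bad = df[~df[likert_cols].isin(range(1, 6)).all(axis=1)]
    issues += [(rid, "out-of-range response") for rid in bad["respondent_id"]]

    # Straight-lining: identical answers across all scale questions
    flat = df[df[likert_cols].nunique(axis=1) == 1]
    issues += [(rid, "straight-lining") for rid in flat["respondent_id"]]

    return pd.DataFrame(issues, columns=["respondent_id", "issue"])
```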
Posted 2 weeks ago
3.0 - 7.0 years
18 - 20 Lacs
Bengaluru
Work from Office
Responsibilities:
- Lead the design, architecture, and development of scalable full-stack applications using Node.js and Python.
- Drive end-to-end delivery of Gen AI features, including integrating OpenAI APIs, designing RAG pipelines, and applying prompt engineering.
- Architect and implement chatbot platforms and conversational flows using third-party tools or custom frameworks.
- Mentor and guide junior and mid-level developers through code reviews, pair programming, and technical discussions.
- Collaborate with cross-functional teams (UX, AI/ML, product) to align technology with business goals.
- Build robust APIs and data processing layers for backend systems, ensuring performance and reliability.
- Contribute hands-on to UI development using HTML, CSS, JavaScript, and modern frameworks.
- Enforce best practices in coding, testing, and agile delivery.
- Manage technical risks, escalate issues early, and drive resolution in coordination with stakeholders.

Must-Have Skills:
- 8+ years of experience in full-stack development with Node.js, Python, and web technologies.
- Proven experience building and scaling chatbots and Gen AI applications on Azure.
- Deep understanding of OpenAI APIs, prompt engineering, and LLM integration.
- Hands-on experience designing or implementing Retrieval-Augmented Generation (RAG) systems.
- Strong knowledge of REST APIs, SQL/NoSQL databases, and cloud-native development.
- Solid experience with frontend frameworks (e.g., React, Vue.js) and UI best practices.
- Excellent communication and leadership skills, with a collaborative mindset.

Nice-to-Have Skills:
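For illustration, a compact sketch of the RAG pattern referenced above, using the OpenAI Python SDK with an in-memory corpus; the documents, model names, and prompt are placeholder assumptions, not the team's actual design:

```python
# Minimal RAG sketch: embed documents, retrieve by cosine similarity,
# and ground the LLM answer in the retrieved context. The corpus and
# model names are illustrative placeholders, not a production design.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
docs = ["Refund policy: refunds within 30 days.", "Shipping takes 5-7 days."]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question: str) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity against every stored document vector
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = docs[int(np.argmax(sims))]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this context only:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```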
Posted 2 weeks ago
2.0 - 5.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Sanas is revolutionizing the way we communicate with the world's first real-time algorithm designed to modulate accents, eliminate background noises, and magnify speech clarity. Pioneered by seasoned startup founders with a proven track record of creating and steering multiple unicorn companies, our groundbreaking GDP-shifting technology sets a gold standard. Sanas is a 200-strong team, established in 2020. In this short span, we've successfully secured over $100 million in funding. Our innovations have been supported by the industry's leading investors, including Insight Partners, Google Ventures, Quadrille Capital, General Catalyst, Quiet Capital, and other influential investors. Our reputation is further solidified by collaborations with numerous Fortune 100 companies. With Sanas, you're not just adopting a product; you're investing in the future of communication.

We're looking for a sharp, hands-on Data Engineer to help us build and scale the data infrastructure that powers cutting-edge audio and speech AI products. You'll be responsible for designing robust pipelines, managing high-volume audio data, and enabling machine learning teams to access the right data fast. As one of the first dedicated data engineers on the team, you'll play a foundational role in shaping how we handle data end-to-end, from ingestion to training-ready features. You'll work closely with ML engineers, research scientists, and product teams to ensure data is clean, accessible, and structured for experimentation and production.

Key Responsibilities:
- Build scalable, fault-tolerant pipelines for ingesting, processing, and transforming large volumes of audio and metadata.
- Design and maintain ETL workflows for training and evaluating ML models, using tools like Airflow or custom pipelines.
- Collaborate with ML research scientists to make raw and derived audio features (e.g., spectrograms, MFCCs) efficiently available for training and inference.
- Manage and organize datasets, including labeling workflows, versioning, annotation pipelines, and compliance with privacy policies.
- Implement data quality, observability, and validation checks across critical data pipelines.
- Help optimize data storage and compute strategies for large-scale training.

Qualifications:
- 2-5 years of experience as a Data Engineer, Software Engineer, or similar role with a focus on data infrastructure.
- Proficient in Python, SQL, and distributed data processing tools (e.g., Spark, Dask, Beam).
- Experience with cloud data infrastructure (AWS/GCP), object storage (e.g., S3), and data orchestration tools.
- Familiarity with audio data and its unique challenges (large file sizes, time-series features, metadata handling) is a strong plus.
- Comfortable working in a fast-paced, iterative startup environment where systems are constantly evolving.
- Strong communication skills and a collaborative mindset; you'll be working cross-functionally with ML, infra, and product teams.

Nice to Have:
- Experience with data for speech models like ASR, TTS, or speaker verification.
- Knowledge of real-time data processing (e.g., Kafka, WebSockets, or low-latency APIs).
- Background in MLOps, feature engineering, or supporting model lifecycle workflows.
- Experience with labeling tools, audio annotation platforms, or human-in-the-loop systems.

Joining us means contributing to the world's first real-time speech understanding platform, revolutionizing Contact Centers and Enterprises alike. Our technology empowers agents, transforms customer experiences, and drives measurable growth. But this is just the beginning. You'll be part of a team exploring the vast potential of an increasingly sonic future.
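As an illustration of making derived audio features available for training, a minimal sketch using librosa; the file path, sample rate, and pooling choice are assumptions:

```python
# Sketch of deriving training-ready audio features (MFCCs) from raw audio
# with librosa; the file path and parameter choices are illustrative.
import librosa
import numpy as np

def extract_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)          # resample to 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Mean-pool over time so each clip yields a fixed-length feature vector
    return mfcc.mean(axis=1)

features = extract_mfcc("clip_0001.wav")  # shape: (13,)
```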
Posted 2 weeks ago
8.0 - 9.0 years
25 - 30 Lacs
Mumbai
Work from Office
Data Validation (DV) Specialist (Using SPSS) - Team Leader

Job Description:
- Perform data quality checks and validation on market research datasets.
- Develop and execute scripts and automated processes to identify data anomalies.
- Collaborate with the Survey Programming team to review survey questionnaires and recommend efficient programming and an optimal layout that enhances user experience.
- Investigate and document data discrepancies, working with the survey programming team and data collection vendors as needed.
- Collaborate with Survey Programmers and internal project managers to understand survey requirements and provide guidance on quality assurance best practices.
- Provide constructive feedback and suggestions for improving data quality, aiming to enhance overall survey quality.
- Automate data validation processes where possible to improve efficiency and reduce time spent on repetitive validation tasks.
- Maintain thorough documentation of findings and recommendations to ensure transparency and consistency in quality practices.
- Actively participate in team meetings to discuss project developments, quality issues, and improvement strategies, fostering a culture of continuous improvement.
- Manage the pipeline and internal/external stakeholder expectations.
- Train and mentor junior team members.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Statistics, or a related field.
- At least 4 years of experience in data validation processes.
- Familiarity with data validation using SPSS, Dimensions, Quantum, or similar platforms.
- A proactive team player who thrives in a fast-paced environment and enjoys repetitive tasks that contribute to project excellence.
- Programming knowledge in a major language such as R, JavaScript, or Python, with an interest in building automation scripts for data validation.
- Excellent problem-solving skills and a willingness to learn innovative quality assurance methodologies.
- A desire for continuous improvement in processes, focusing on efficiencies that lead to scalable, high-quality data processing outcomes.

Location: Mumbai | Brand: Merkle | Time Type: Full time | Contract Type: Permanent
Posted 2 weeks ago
1.0 - 2.0 years
5 - 6 Lacs
Mumbai
Work from Office
Data Validation (DV) Specialist (Using SPSS) - Analyst

Job Description:

Core Responsibilities:
- Perform data quality checks and validation on market research datasets.
- Develop and execute scripts and automated processes to identify data anomalies.
- Collaborate with the Survey Programming team to review survey questionnaires and recommend efficient programming and an optimal layout that enhances user experience.
- Investigate and document data discrepancies, working with the survey programming team and data collection vendors as needed.
- Create and maintain detailed data documentation and validation reports.
- Collaborate with Survey Programmers and internal project managers to understand data processing requirements and provide guidance on quality assurance best practices.
- Provide constructive feedback and suggestions for improving data quality, aiming to enhance overall survey quality.
- Automate data validation processes where possible to improve efficiency and reduce time spent on repetitive validation tasks.
- Maintain thorough documentation of findings and recommendations to ensure transparency and consistency in quality practices.
- Actively participate in team meetings to discuss project developments, quality issues, and improvement strategies, fostering a culture of continuous improvement.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Statistics, or a related field.
- At least 2 years of experience in data validation processes.
- Familiarity with data validation using SPSS, Dimensions, Quantum, or similar platforms.
- A proactive team player who thrives in a fast-paced environment and enjoys repetitive tasks that contribute to project excellence.
- Programming knowledge in a major language such as R, JavaScript, or Python, with an interest in building automation scripts for data validation.
- Excellent problem-solving skills and a willingness to learn innovative quality assurance methodologies.
- A desire for continuous improvement in processes, focusing on efficiencies that lead to scalable, high-quality data processing outcomes.

Location: Mumbai | Brand: Merkle | Time Type: Full time | Contract Type: Permanent
Posted 2 weeks ago
5.0 - 7.0 years
25 - 30 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Job Summary:
We are seeking a highly skilled individual to join our team as a Data Engineering/Operations Specialist. This role will be responsible for maintaining and evolving data pipeline architecture, orchestrating new data sources for further processing, and keeping documentation of pipelines and data feeds up to date.

Key Responsibilities:
* Maintain, upgrade, and evolve data pipeline architectures to ensure optimal performance and scalability.
* Orchestrate the integration of new data sources into existing pipelines for further processing and analysis.
* Keep documentation up to date for pipelines and data feeds to facilitate smooth operations and collaboration within the team.
* Collaborate with cross-functional teams to understand data requirements and optimize pipeline performance accordingly.
* Troubleshoot and resolve any issues related to pipeline architecture and data processing.

Role Requirements and Qualifications:
* Experience with cloud platforms for deployment and management of data pipelines.
* Familiarity with AWS/Azure for efficient data processing workflows.
* Experience constructing FAIR data products is highly desirable.
* Basic understanding of computational clusters to optimize pipeline performance.
* Prior experience in data engineering or operations roles, preferably in a cloud-based environment.
* Proven track record of successfully maintaining and evolving data pipeline architectures.
* Strong problem-solving skills and the ability to troubleshoot technical issues independently.
* Excellent communication skills to collaborate effectively with cross-functional teams.

Why Join Us:
* Opportunities to work on transformative projects, cutting-edge technology, and innovative solutions with leading global firms across industry sectors.
* Continuous investment in employee growth and professional development, with a strong focus on up- and re-skilling.
* Competitive compensation & benefits, ESOPs, and international assignments.
* Supportive environment with a healthy work-life balance and a focus on employee well-being.
* Open culture that values diverse perspectives, encourages transparent communication, and rewards contributions.

How to Apply:
If you are interested in joining our team and meet the qualifications listed above, please apply and submit your resume highlighting why you are the ideal candidate for this position.
Posted 2 weeks ago
10.0 - 15.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Required skills:
- 6+ years of hands-on experience with Kubernetes, specifically Amazon EKS: critical for designing and managing enterprise-scale container orchestration with advanced networking, security, and multi-tenancy.
- Deep expertise in AWS serverless technologies, including Lambda, API Gateway, EventBridge, and Step Functions, to lead scalable, event-driven architecture initiatives.
- 4+ years of experience with Terraform: required for architecting Infrastructure as Code strategies, including advanced module design, state management, and CI/CD integration.
- 2+ years of experience with ArgoCD and GitOps practices: essential for implementing declarative, version-controlled infrastructure and application deployments across multi-cluster environments.
- Expert-level proficiency in GitHub Actions and branching strategies: needed to design sophisticated CI/CD pipelines and manage complex workflow orchestration, release flows, and compliance automation.
- Advanced scripting skills in Python, .NET, or similar languages: important for developing automation tools and integrating infrastructure with operational workflows.
- Experience with progressive delivery patterns, including blue-green deployments, canary releases, and feature flagging, to support safe and controlled rollouts.
- Knowledge of site reliability engineering (SRE) principles: to establish SLIs/SLOs, drive observability, and ensure platform reliability and performance.
- Proven leadership in DevOps or platform engineering: demonstrated ability to mentor teams, lead incident response, and drive technical strategy aligned with business goals.

Preferred skills:
- Strong understanding of cloud security and compliance frameworks such as SOC 2, HIPAA, and PCI, to ensure infrastructure meets regulatory and governance standards.
- Experience with Databricks and MLOps workflows: valuable for teams working in data-intensive environments and supporting data platform governance.
- Advanced knowledge of Kafka, MongoDB, and Azure DevOps: useful for managing event-driven architectures and multi-cloud Kubernetes environments.

Education: Bachelor's degree in Computer Science or equivalent experience.

Responsibilities:
- Lead cloud infrastructure strategy and automation with Kubernetes, AWS, GitOps, and CI/CD to drive scalable, secure DevOps solutions.
- Architect and manage enterprise-scale container platforms using Kubernetes on cloud infrastructure, ensuring secure, multi-tenant, and highly available environments.
- Lead the design and implementation of serverless architectures using cloud-native services to support scalable, event-driven applications.
- Develop and maintain Infrastructure as Code strategies using tools like Terraform, including advanced module design, state management, and integration with deployment pipelines.
- Implement and champion GitOps practices using declarative tools to manage infrastructure and application deployments across multiple environments.
- Design and optimize complex continuous integration and delivery pipelines using advanced workflow orchestration, reusable components, and secure release strategies.
- Automate testing, security scanning, and compliance validation across development, staging, and production environments to ensure operational excellence.
- Establish and monitor service-level indicators and objectives to drive reliability engineering practices and ensure platform performance and resilience.
- Lead incident response and root cause analysis efforts to resolve critical issues and implement preventive measures that improve system stability.
- Mentor and guide junior engineers and peers by sharing best practices, conducting code reviews, and fostering a culture of continuous learning and improvement.
- Collaborate with cross-functional teams, including product, security, and business stakeholders, to align infrastructure strategy with organizational goals.
- Drive strategic planning and technical vision for the evolution of the DevOps toolchain, cloud infrastructure, and platform architecture.
- Promote a culture of DevOps excellence by advocating for automation, observability, and continuous improvement across the engineering organization.

About the team:
Our Data Estate DevOps team is responsible for enabling the scalable, secure, and automated infrastructure that powers Moody's enterprise data platform. We ensure the seamless deployment, monitoring, and performance of data pipelines and services that deliver curated, high-quality data to internal and external consumers. We contribute to Moody's by:
- Accelerating data delivery and operational efficiency through automation, observability, and infrastructure-as-code practices that support near real-time data processing and remediation.
- Supporting data integrity and governance by enabling traceable, auditable, and resilient systems that align with regulatory compliance and GenAI readiness.
- Empowering innovation and analytics by maintaining a modular, interoperable platform that integrates internal and third-party data sources for downstream research models, client workflows, and product applications.
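To make the SLI/SLO practice concrete, a toy error-budget calculation; the SLO target and request counts are invented numbers:

```python
# Toy illustration of the SLI/SLO arithmetic behind error budgets:
# given a 99.9% availability SLO, how much budget has a service burned?
# The request counts below are made-up numbers.
SLO = 0.999                      # availability target
total_requests = 10_000_000
failed_requests = 7_500

sli = 1 - failed_requests / total_requests          # measured availability
error_budget = (1 - SLO) * total_requests           # allowed failures: 10,000
budget_burned = failed_requests / error_budget      # fraction of budget spent

print(f"SLI: {sli:.5f}, error budget burned: {budget_burned:.1%}")
# SLI: 0.99925, error budget burned: 75.0%
```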
Posted 2 weeks ago
3.0 - 6.0 years
2 - 6 Lacs
Chennai
Work from Office
Tech stack: AWS Lambda, Glue, Kafka/Kinesis, RDBMS (Oracle, MySQL, Redshift, PostgreSQL, Snowflake), API Gateway, CloudFormation/Terraform, Step Functions, CloudWatch, Python, PySpark.

Job role & responsibilities: We are looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge of building data processing systems with Python, PySpark, and cloud technologies (AWS), along with experience in development on the AWS Cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR).

Required skills: Amazon Kinesis, Amazon Aurora, data warehousing, SQL, AWS Lambda, Spark, AWS QuickSight; advanced Python skills; data engineering, ETL, and ELT skills; experience with cloud platforms (AWS, GCP, or Azure).

Mandatory skills: data warehousing, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift.
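A hedged sketch of the event-driven piece of this stack: an AWS Lambda handler consuming Kinesis records; the payload shape and the downstream load are placeholders:

```python
# Sketch of an AWS Lambda handler consuming a Kinesis stream, the kind of
# event-driven ETL step this stack implies. Record fields follow the
# standard Kinesis event shape; the destination load is omitted.
import base64
import json

def lambda_handler(event, context):
    processed = 0
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # ... enrich/validate payload, then load to Redshift/S3 (omitted) ...
        processed += 1
    return {"processed": processed}
```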
Posted 2 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Chennai
Work from Office
We are seeking a skilled Lead AWS Data Engineer with strong programming and SQL skills to join our team. The ideal candidate will have hands-on experience with AWS Data Analytics services and a basic understanding of general AWS services. Additionally, prior experience with Oracle and Postgres databases and secondary skills in Python and Azure DevOps will be an advantage.

Key Responsibilities:
- Design, develop, and optimize data pipelines using AWS Data Analytics services such as RDS, DMS, Glue, Lambda, Redshift, and Athena.
- Implement data migration and transformation processes using AWS DMS and Glue.
- Work with SQL (Oracle & Postgres) to query, manipulate, and analyse large datasets.
- Develop and maintain ETL/ELT workflows for data ingestion and transformation.
- Utilize AWS services like S3, IAM, CloudWatch, and VPC to ensure secure and efficient data operations.
- Write clean and efficient Python scripts for automation and data processing.
- Collaborate with DevOps teams using Azure DevOps for CI/CD pipelines and infrastructure management.
- Monitor and troubleshoot data workflows to ensure high availability and performance.

Preferred Qualifications:
- AWS certifications in Data Analytics, Solutions Architect, or DevOps.
- Experience with data warehousing concepts and data lake implementations.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
Posted 2 weeks ago
4.0 - 9.0 years
37 - 45 Lacs
Bengaluru
Work from Office
You were made to do this work: designing new technologies, diving into data, optimizing digital experiences, and constantly developing better, faster ways to get results. You want to be part of a performance culture dedicated to building technology for a purpose that matters. You want to work in an environment that promotes sustainability, inclusion, well-being, and career development. In this role, you'll help us deliver better care for billions of people around the world. It starts with YOU.

In this role, you will:
- Understand the business problem, analyse the data, and define the success criteria.
- Work with engineering and architecture teams on data identification, collection, harmonization, and cleansing for data analysis and preparation.
- Analyse and identify appropriate algorithms for the defined problem statement.
- Analyse additional data inputs and methods that would improve model results, and look for opportunities.
- Build models that are interpretable, explainable, and sustainable at scale and that meet business needs.
- Build visualizations and demonstrate model results to stakeholders and the leadership team.
- Be conversant with Agile methodologies and tools, with a track record of delivering products in a production environment.
- Explore and recommend new tools and processes that can be leveraged across the data preparation pipeline for capabilities and efficiencies.
- Ensure that development and deployment are tightly integrated to maximize the deployment user experience.
- Act as curator for all code and binary artifact repositories (containers, compiled code).
- Work with AI strategists, DevOps, and data engineers/domain SMEs to understand how data availability and quality affect model performance.
- Develop and disseminate innovative techniques, processes, and tools that can be leveraged across the AI product development lifecycle.

About You:
You perform at the highest level possible, and you appreciate a performance culture fueled by authentic caring. You want to be part of a company actively dedicated to sustainability, inclusion, well-being, and career development. You love what you do, especially when the work you do makes a difference. At Kimberly-Clark, we're constantly exploring new ideas on how, when, and where we can best achieve results. When you join our team, you'll experience Flex That Works: flexible (hybrid) work arrangements that empower you to have purposeful time in the office and partner with your leader to make flexibility work for both you and the business. In one of our technical roles, you'll focus on winning with consumers and the market, while putting safety, mutual respect, and human dignity at the center.

To succeed in this role, you will need the following qualifications:
- Experience building ML models in a modern cloud-based architecture.
- 4+ years of demonstrated experience developing highly scalable, reliable, and resilient multi-tenanted ML algorithms for large-scale use cases in supply chain (logistics, manufacturing, procurement, etc.), sales and marketing, revenue management, and other business areas.
- 4+ years of demonstrated experience developing ML pipelines on various frameworks on AWS, Azure, or similar cloud platforms.
- Proficient and experienced in Python and SQL for data analysis and exploration.
- Experience in cloud-based solutioning and managing enterprise-grade, end-to-end machine learning solutions with automated pipelines for data processing, feature engineering, training, evaluation, deployment, integration, and monitoring.
- Hands-on experience with Docker, Kubernetes, and cloud infrastructure such as Azure, AWS, and GCP, and with machine learning tools like Azure Machine Learning, Amazon SageMaker, MLflow, and Kubeflow in production.
- Experience across the end-to-end AI lifecycle, including data science, machine learning, predictive modelling, Natural Language Processing (NLP), deep learning, advanced analytics and statistical modelling, Python, SQL, and Azure/AWS/GCP.
- Experience with model monitoring, explainability, model management, version tracking & storage, and AI governance.
- Deployment of models with Docker, ML pipelines, and Azure Machine Learning.
- Knowledge of SQL/NoSQL databases, microservices, REST APIs, and Docker.
- Strong knowledge of source code management, configuration management, CI/CD, security, and performance.
- Ability to look ahead to identify opportunities and thrive in a culture of innovation.
- Self-starter who can see the big picture and prioritize work to make the largest impact on the business and the customer's vision and requirements.
- Experience building, testing, and deploying code to run on an Azure cloud data lake.
- A can-do attitude in anticipating and resolving problems to help your team achieve its goals.
- Must have experience with Agile development methods.
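For illustration, a minimal experiment-tracking sketch with MLflow (one of the tools named above); the dataset, model, and logged values are synthetic:

```python
# Minimal MLflow tracking sketch: train a model, log params/metrics, and
# store the model as a versioned artifact. All values are synthetic.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for deployment
```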
Posted 2 weeks ago
4.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
As a DevOps engineer, you will share your expertise in implementing and maintaining automated build and deployment pipelines and in optimizing build times and resource usage. You will contribute to CI/CD methodologies and Git branching strategies.

You have:
- A graduate or postgraduate degree in Engineering with 4+ years of experience in DevOps and CI/CD pipelines.
- Experience with Docker, Kubernetes (EKS), and OpenShift.
- Software development experience using Python, Groovy, or Shell.
- Experience designing and implementing CI/CD pipelines.
- Experience working with Git and an understanding of Git branching strategies.

It would be nice if you also have:
- Knowledge of AI/ML algorithms.
- Knowledge of Yocto, Jenkins, Gerrit, distcc, and Zuul.

You will leverage experience in Yocto, Jenkins, Gerrit, and other build tools to streamline and optimize the build process. You will proactively monitor build pipelines, investigate failures, and implement solutions to improve reliability and efficiency. You will utilize AI/ML algorithms to automate and optimize data-driven pipelines, improving data processing and analysis. You will work closely with the team to understand their needs and contribute to a collaborative and efficient work environment. You will actively participate in knowledge-sharing sessions and contribute to the team's overall understanding of best practices and innovative solutions. You will foster a culture of continuous improvement, constantly seeking ways to optimize processes and enhance the overall effectiveness of the team.
Posted 2 weeks ago
0.0 - 1.0 years
1 - 2 Lacs
Mumbai Suburban, Mumbai (All Areas)
Work from Office
Datamatics, one of the leading IT & BPM service providers, is currently hiring Data Management Associates for a leading international client.

Criteria:
- Completed graduation from a Commerce stream (2025 candidates awaiting results can apply)
- Good communication skills (written & spoken)
- Good email-drafting skills
- Good grasp of the basics of MS Excel & MS Word
- Male candidates preferred
- Recent graduates/freshers preferred
- Only immediate joiners preferred

Shifts: 3:00 pm to 11:30 pm (fixed shifts)
Working days: Monday to Friday (weekends off)
Job Location: MIDC, Andheri (East)
Salary: In-hand 13,500/- net p.m. (additional benefits: PF, Bonus, ESIC & OT if applicable)

What are we offering:
- An excellent opportunity to work in a growing organization
- Working under tenured, experienced, and supportive leaders
- Kick-start industry experience for freshers
- Involvement in various employee-connect, wellness, and fun initiative activities

Interested candidates can apply on the job post or contact the details below.
Francis Fernandes: francis.fernandes@datamatics.com | 8450979317
Posted 2 weeks ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Apache Spark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of the projects you are involved in.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of milestones.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Apache Spark.
- Strong understanding of distributed computing principles.
- Experience with data processing frameworks and tools.
- Familiarity with cloud platforms and services.
- Ability to write efficient and scalable code.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Apache Spark.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 2 weeks ago
15.0 - 20.0 years
18 - 22 Lacs
Hyderabad
Work from Office
Project Role: Data Platform Architect
Project Role Description: Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Microsoft Azure Data Services
Good to have skills: Microsoft SQL Server, Python (Programming Language), Microsoft Azure Databricks
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Architect, you will be responsible for architecting the data platform blueprint and implementing the design, which includes various data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure seamless integration between systems and data models, while also addressing any challenges that arise during the implementation process. You will engage in discussions with stakeholders to gather requirements and provide insights that drive the overall architecture of the data platform, ensuring it meets the needs of the organization effectively.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Develop and maintain documentation related to data architecture and design.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Microsoft Azure Data Services.
- Good To Have Skills: Experience with Microsoft Azure Databricks, Python (Programming Language), Microsoft SQL Server.
- Strong understanding of data modeling techniques and best practices.
- Experience with cloud-based data storage solutions and data processing frameworks.
- Familiarity with data governance and compliance standards.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Microsoft Azure Data Services.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 2 weeks ago
2.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Apache Spark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will play a crucial role in developing solutions that align with organizational goals and enhance operational efficiency.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to analyze business requirements and translate them into technical solutions.
- Develop and implement software solutions using Apache Spark to enhance application functionality.
- Troubleshoot and debug applications to optimize performance and ensure seamless operation.
- Stay updated with industry trends and best practices to continuously improve application development processes.
- Provide technical guidance and support to junior team members to foster skill development.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Apache Spark.
- Strong understanding of big data processing and distributed computing.
- Experience with data processing frameworks like Hadoop and Spark SQL.
- Hands-on experience developing scalable and efficient applications using Apache Spark.
- Knowledge of programming languages such as Scala or Python.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Apache Spark.
- This position is based at our Pune office.
- 15 years of full-time education is required.
Posted 2 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Apache Spark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. Your typical day will revolve around creating innovative solutions to address various business needs and ensuring seamless application functionality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the application development process.
- Conduct code reviews and ensure coding standards are met.
- Implement best practices for application design and development.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Apache Spark.
- Strong understanding of big data processing.
- Experience with distributed computing frameworks.
- Hands-on experience developing scalable applications.
- Knowledge of data processing and transformation techniques.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Apache Spark.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 2 weeks ago
4.0 - 6.0 years
12 - 16 Lacs
Chennai
Work from Office
We are seeking a skilled Data Engineer who can function as a Data Architect, designing scalable data pipelines, table structures, and ETL workflows. The ideal candidate will be responsible for recommending cost-effective and high-performance data architecture solutions and collaborating with cross-functional teams to enable efficient analytics and data science initiatives.

Key Responsibilities:
- Design and implement ETL workflows, data pipelines, and table structures to support business analytics and data science.
- Optimize data storage, retrieval, and processing for cost-efficiency and high performance.
- Collaborate with Analytics and Data Science teams on feature engineering and KPI computations.
- Develop and maintain data models for structured and unstructured data.
- Ensure data quality, integrity, and security across systems.
- Work with cloud platforms (AWS/Azure/GCP) to design and manage scalable data architectures.

Technical Skills Required:
- SQL & Python: strong proficiency in writing optimized queries and scripts.
- PySpark: hands-on experience with distributed data processing.
- Cloud technologies (AWS/Azure/GCP): experience with cloud-based data solutions.
- Spark & Airflow: experience with big data frameworks and workflow orchestration.
- Gen AI (preferred): exposure to generative AI applications is a plus.

Preferred Qualifications:
- Experience in data modeling, ETL optimization, and performance tuning.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Prior experience working with large-scale data processing.
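As a sketch of the workflow orchestration mentioned above, a minimal Airflow DAG; the task bodies and schedule are placeholders (assumes Airflow 2.4+ for the `schedule` argument):

```python
# Sketch of an Airflow-orchestrated ETL workflow: three tasks chained in a
# daily DAG. Task callables and the schedule are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(): ...   # pull from source systems (omitted)
def transform(): ... # clean/enrich (omitted)
def load(): ...      # write to the warehouse (omitted)

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # extract, then transform, then load
```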
Posted 2 weeks ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
The Service Reliability Engineer (SRE) role in Apple Services Engineering requires a mix of strategic engineering and design along with hands-on, technical work. This SRE will configure, tune, and fix multi-tiered systems to achieve optimal application performance, stability, and availability. We manage jobs as well as applications on bare-metal and cloud computing platforms to deliver data processing for many of Apple's global products. Our teams work with exabytes of data, petabytes of memory, and tens of thousands of jobs to enable predictable and performant data analytics, enabling features in Apple Music, TV+, the App Store, and other world-class products. If you love designing and running systems that will impact millions of users, then this is the place for you!

- Support Java-based applications & Spark/Flink jobs on bare metal, AWS & Kubernetes.
- Understand application requirements (performance, security, scalability, etc.) and assess the right services/topology on AWS, bare metal & Kubernetes.
- Build automation to enable self-healing systems.
- Build tools to monitor high-performance, low-latency applications and alert on issues.
- Troubleshoot application-specific, core network, system & performance issues.
- Get involved in challenging and fast-paced projects supporting Apple's business by delivering innovative solutions.
- Monitor production, staging, test, and development environments for a myriad of applications in an agile and dynamic organisation.

Qualifications:
- BS degree in computer science or an equivalent field with 5+ years of experience, or MS degree with 3+ years, or equivalent.
- At least 5 years in a Site Reliability Engineering (SRE) or DevOps role.
- 5+ years of running services in a large-scale *nix environment.
- Understanding of SRE principles and goals, along with prior on-call experience.
- Extensive experience managing applications on AWS & Kubernetes.
- Deep understanding and experience in one or more of the following: Hadoop, Spark, Flink, Kubernetes, AWS.

Preferred Qualifications:
- Fast learner with excellent analytical problem-solving and interpersonal skills.
- Experience supporting Java applications.
- Experience with Big Data technologies.
- Experience working with geographically distributed teams and implementing high-level projects and migrations.
- Strong communication skills and the ability to deliver results on time with high quality.
Posted 2 weeks ago
10.0 - 15.0 years
3 - 7 Lacs
Sangli
Work from Office
Roles and Responsibilities:
- Design, develop, and maintain backend services and APIs using Python and FastAPI.
- Lead and mentor a team of developers, providing technical guidance and support.
- Conduct code reviews to ensure code quality, performance, and adherence to coding standards.
- Collaborate with cross-functional teams to define architectural strategies and roadmap.
- Implement best practices for code refactoring, optimization, and scalability.
- Drive architectural design discussions and decisions based on business requirements and technology trends.
- Optimize bulk data processing pipelines for efficiency and reliability.
- Troubleshoot and resolve complex technical issues in a timely manner.
- Stay updated with industry trends and emerging technologies to continuously improve development processes.

Qualifications and Experience:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Proven experience as a Senior Backend Developer or Solution Architect with 10+ years in software development.
- Strong knowledge of object-oriented programming, data structures, algorithms, and design patterns.
- Hands-on experience with FastAPI or similar web frameworks.
- Familiarity with SQL and NoSQL databases.
- Expertise in bulk data processing techniques and tools.
- Solid understanding of software architecture principles and patterns.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration skills.
- Ability to work in a fast-paced, dynamic environment.
- Passion for learning new technologies and eagerness to stay up to date with emerging trends in backend development.
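A minimal FastAPI sketch of the kind of backend service and API described; the Item model and in-memory store are hypothetical:

```python
# Minimal FastAPI service sketch: a typed model, an in-memory store, and
# two endpoints. The Item model and store are hypothetical placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    id: int
    name: str
    price: float

_db: dict[int, Item] = {}  # stand-in for a real database

@app.post("/items")
def create_item(item: Item) -> Item:
    _db[item.id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _db:
        raise HTTPException(status_code=404, detail="Item not found")
    return _db[item_id]

# Run locally with: uvicorn main:app --reload
```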
Posted 2 weeks ago
8.0 - 13.0 years
1 - 4 Lacs
Pune
Work from Office
Roles & Responsibilities:
- Provides expert-level development, system analysis, design, and implementation of applications using AWS services, specifically using Python for Lambda.
- Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients).
- Develops code that reuses objects, is well-structured, includes sufficient comments, and is easy to maintain.
- Provides follow-up production support when needed; submits change control requests and documents.
- Participates in design, code, and test inspections throughout the life cycle to identify issues and ensure methodology compliance.
- Participates in systems analysis activities, including system requirements analysis and definition (e.g., prototyping).
- Participates in other meetings, such as those for use case creation and analysis.
- Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied.
- Assists in integration, systems, acceptance, and other related testing as needed.
- Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests.

Technical Skills Required:
- Experience building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) & Glue ETL.
- Deep experience developing data processing and data manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations.
- Experience deploying and operationalizing code using CI/CD tools (Bitbucket and Bamboo).
- Strong AWS cloud computing experience, with extensive experience in Lambda, S3, EMR, and Redshift.
- Should have worked on Data Warehouse/Database technologies for at least 8 years.
- Any AWS certification will be an added advantage.
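To illustrate the PySpark pattern described (read, merge, enrich, load), a short sketch; the bucket paths and column names are placeholders:

```python
# Sketch of the PySpark pattern the role describes: read from external
# sources, merge, enrich, and load to a target. Paths and columns are
# illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-enrich").getOrCreate()

orders = spark.read.parquet("s3://bucket/raw/orders/")
customers = spark.read.parquet("s3://bucket/raw/customers/")

enriched = (
    orders.join(customers, on="customer_id", how="left")        # merge sources
          .withColumn("order_value", F.col("qty") * F.col("unit_price"))
          .filter(F.col("order_value") > 0)                     # basic quality gate
)

enriched.write.mode("overwrite").parquet("s3://bucket/curated/orders_enriched/")
```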
Posted 2 weeks ago
4.0 - 9.0 years
20 - 25 Lacs
Hyderabad
Work from Office
We are seeking a skilled and experienced Cognos TM1 Developer with a strong background in ETL processes and Python development. The ideal candidate will be responsible for designing, developing, and supporting TM1 solutions, integrating data pipelines, and automating processes using Python. This role requires strong problem-solving skills, business acumen, and the ability to work collaboratively with cross-functional teams.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 4+ years of hands-on experience with IBM Cognos TM1 / Planning Analytics.
- Strong knowledge of TI processes, rules, dimensions, cubes, and TM1 Web.
- Proven experience building and managing ETL pipelines (preferably with tools like Informatica, Talend, or custom scripts).
- Proficiency in Python programming for automation, data processing, and system integration.
- Experience with REST APIs, JSON/XML data formats, and data extraction from external sources.

Preferred technical and professional experience:
- Strong SQL knowledge and the ability to work with relational databases.
- Familiarity with Agile methodologies and version control systems (e.g., Git).
- Excellent analytical, problem-solving, and communication skills.
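A hedged sketch of the REST-based data extraction this role mentions, staged into pandas ahead of a TM1 load; the endpoint, auth token, and response shape are entirely hypothetical:

```python
# Sketch of pulling data from a REST source into a DataFrame ahead of a
# TM1 load. The endpoint, auth, and JSON shape are hypothetical.
import pandas as pd
import requests

resp = requests.get(
    "https://example.com/api/v1/actuals",        # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"}, # placeholder credential
    timeout=30,
)
resp.raise_for_status()
df = pd.DataFrame(resp.json()["records"])        # assumed {"records": [...]} shape
# ... reshape df to TM1 dimension order, then load via a TI process or API ...
```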
Posted 2 weeks ago
2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have detection and prevention tools for Company products and Platform and customer-facing
Posted 2 weeks ago
2.0 - 5.0 years
14 - 17 Lacs
Kochi
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have detection and prevention tools for Company products and Platform and customer-facing
Posted 2 weeks ago
0.0 - 2.0 years
6 - 10 Lacs
Hyderabad
Work from Office
As an AI/ML Engineer, you will be a core contributor to the implementation of intelligent, production-ready solutions that integrate seamlessly with Microsoft platforms. Working closely with our AI Architect, Data Scientist, and Product Owners, you will bring AI concepts to life, building robust pipelines, interfaces, and integrations for ERP and business applications powered by large language models (LLMs), Azure AI, and Copilot.

Your responsibilities will include:
- Implement intelligent AI-driven solutions, including LLM-powered agents, chat interfaces, and decision-support tools.
- Integrate AI capabilities with Microsoft platforms, including Azure AI, Azure ML, Power Platform, and Dataverse.
- Enhance Microsoft Dynamics 365 ERP (Finance & Supply Chain) with embedded AI features and Copilot experiences.
- Build scalable, modular data pipelines on Azure using, e.g., Data Factory, Synapse Analytics, and other Microsoft integration tools.
- Design and maintain reusable AI components (e.g., prompt templates, embeddings, RAG pipelines).
- Automate data collection, preprocessing, evaluation, and retraining workflows.
- Assist with monitoring, evaluation, and optimization of AI models in production environments.
- Write clean, maintainable code and contribute to shared AI engineering infrastructure.
- Collaborate cross-functionally to deliver AI functionality as part of larger product solutions.

What You Need to Succeed:
- Proven experience in AI/ML applications, ideally in enterprise or ERP settings.
- Hands-on experience with Azure AI services, Copilot, and ERP systems, preferably Microsoft Dynamics 365 or similar platforms.
- Familiarity with Power Platform, Power BI, and Dataverse.
- Strong Python skills for backend logic, data processing, and model orchestration.
- Experience building modular pipelines, APIs, and workflows in cloud environments.
- Understanding of prompt engineering, RAG (Retrieval-Augmented Generation), fine-tuning, and LLM evaluation best practices.
- Ability to work independently and take ownership of projects while meeting deadlines.
- Strong collaboration and communication skills; you can align with architects, developers, and business stakeholders.
- Bonus: experience with MLOps, DevOps, CI/CD, and monitoring tools.

Why You Should Apply:
- Be part of a dynamic community: our supportive and vibrant environment ensures your contributions truly matter. You'll work with passionate professionals who are dedicated to making a difference.
- Drive innovation and excellence: at STAEDEAN, you'll be at the forefront of innovation, developing solutions that transform industries and drive sustainable impact.
- Grow and thrive: we are committed to fostering a culture of continuous improvement and shared success. Whether you're an experienced professional or just starting your career, you'll find ample opportunities to develop your skills, take on new challenges, and grow.
- Make a meaningful impact: your work at STAEDEAN will have a real impact on our customers, partners, and the world. Together, we strive to achieve extraordinary things, pushing the boundaries to create a better future.
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
Gurugram
Work from Office
The company is looking to recruit an ambitious and energetic person as an Asset Intelligence Engineer, who will help the Candi Asset Management team to further deliver high performance on their solar portfolio. Given that Candi is taking solar into a new realm with a new set of tools, the role will require creativity, innovation, and proactive involvement.

Specifically, the job requires taking ownership of the following for the Indian team:
- Daily real-time monitoring of a portfolio of solar power plants and detection of issues and anomalies for Technical Asset Management to solve.
- (Pre-)diagnostics on issues found in the monitoring activity.
- Support Technical Asset Management in tracking and solving plant issues.
- Maintaining the database for monthly performance monitoring.
- Validate and contextualize monthly performance indicators.
- Performance improvement analytics.
- Onboarding of new assets onto the Asset Intelligence setup, including data validation and monitoring portal integration.
- Assisting Asset Data Acquisition with data logger troubleshooting and data validation.
- Assist the global Asset Intelligence team with ecosystem and tool development and maintenance.

Experience: Graduate
Location: Gurgaon, combined with working from home
Hours: Full time

Job Requirements:
- University graduate (BTech / BEng / Electrical Engineering / Energy Systems / Data Science)
- Basic knowledge of electrical power systems
- Highly analytical in data processing and evaluation
- Critical thinking and data validation skills
- Data dashboarding/visualization knowledge
- Programming skills and experience in Python are a plus
- Fundamental machine learning knowledge is a plus
- Desire to work in an international environment, sharing ideas and learnings across borders and cultures

Working Culture:
- You agree to live out the Candi values every day of your employment: put empathy before ego, be authentic no matter what, get it done as one, and follow the Candi principle that less is more.
- A dynamic, cross-functional team player, willing to take initiative on projects in the context of a multicultural scale-up trying to execute big, bold ideas. Having experience working for an international company, or having studied or worked abroad, is considered a plus.
- You must be open, honest, trustworthy, a strong communicator, and understand that what we get done as a team surpasses what we get done individually.
- Candi is an equal opportunities employer.

Candi is unique because:
- We focus on helping businesses of all sizes in emerging markets get access to cheap, clean rooftop solar energy.
- We have a strong emphasis on a client-centric, innovation-driven working culture.
- We are an international team where hybrid working is commonplace, and we trust our team members to actively carve out a role for themselves according to their skill set.
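As a toy illustration of the daily monitoring and anomaly detection described, a small pandas check; the plant names, readings, and threshold are made up:

```python
# Toy sketch of a daily plant-monitoring check: flag plants whose
# generation falls well below expectation. All values are made up.
import pandas as pd

readings = pd.DataFrame({
    "plant": ["P1", "P2", "P3"],
    "actual_kwh": [412.0, 198.0, 405.0],
    "expected_kwh": [420.0, 430.0, 410.0],
})

readings["performance_ratio"] = readings["actual_kwh"] / readings["expected_kwh"]
anomalies = readings[readings["performance_ratio"] < 0.8]   # 20% shortfall flag
print(anomalies[["plant", "performance_ratio"]])            # P2 gets flagged
```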
Posted 2 weeks ago