0.0 years
0 - 2 Lacs
Bengaluru
Work from Office
Walk-in drive for commerce freshers only, at Bangalore on 6th Sep 2025.
Greetings from Infosys BPM Ltd. You are kindly invited to the Infosys BPM walk-in drive on 6th Sep 2025 at Bangalore.
Note: Please carry a copy of this email to the venue, and make sure you register your application before attending the walk-in. Please mention your Candidate ID on top of your resume. Registration link: https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-224839
Interview Information: Interview Date: 6th Sep 2025. Reporting Time: 09:30 AM till 11:00 AM (entry is not allowed after 11:30 AM). Round 1 - Aptitude Assessment (10:00 AM to 12:00 PM). Round 2 - Ops Screening face-to-face interview (12:30 PM to 04:00 PM). Interview Venue: Infosys BPM Limited, Gate 10, Electronic City Phase 1, Bangalore, Karnataka, 560100.
Documents to Carry: 2 sets of your updated CV (hard copies); a face mask; a PAN card or passport as identity proof (mandatory). Candidates must bring their PAN card without fail for the assessment.
Job Description: Job Location: Bangalore. Qualification: B.COM/BBA/MBA/M.COM (only 2024 and 2025 graduates are eligible for interview); 15 years (10+2+3) of full-time education is mandatory. Shifts: night shift (rotational). Notice Period: immediate joiners only.
Role requirements: proficiency in basic computer skills; excellent website research and navigation skills; good at reading, understanding and interpreting content; flexible to work in a 24x7 environment and comfortable with rotational night shifts; excellent verbal, written, interpretation and active-listening skills; good command of English grammar and fluency in English; able to manage outbound calls in a timely manner, following scripts for different subjects/scenarios; able to quickly and efficiently assimilate process knowledge; effective probing and analysis skills, capable of multitasking across voice and data entry; no long leave planned for at least the next year; work is from office only (no WFH).
Job criteria: fresher; data-entry work; excellent verbal and written communication skills; a good team player; good problem-solving skills; able to remain professional and courteous with customers at all times; analytical ability; working from office.
NOTE: 1. Kindly have a working cellphone with microphone and camera access, and ensure a minimum upload/download speed of 2 Mbps. 2. Candidates should carry earphones or headphones to the hiring venue for in-person interviews; personal laptops are not allowed in the venue. 3. Please make sure you register your application before attending the walk-in, and mention your Candidate ID on top of your resume. Career site: https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-224839 4. Candidates must bring their PAN card without fail for the assessment.
Regards, Infosys BPM Recruitment Team
Posted 1 week ago
0.0 - 5.0 years
1 - 5 Lacs
Mumbai, Hyderabad, Mumbai (All Areas)
Work from Office
Accurately input, update & manage data in systems; verify and reconcile records; maintain data integrity and confidentiality; work with MS Office, Tally, or ERP; strong typing speed & attention to detail; ensure timely data delivery.
Posted 1 week ago
9.0 - 13.0 years
10 - 14 Lacs
Pune
Work from Office
Job Description: Reporting to the General Manager - Data Science, the Machine Learning Engineering Manager will be a key part of the Data Science Management Team, leading the productionising of advanced analytics and Machine Learning initiatives. In this exciting new role, you will be expected to collaborate with technical and data teams, building out platform capability and processes to serve our Data Science and analytics community.
Roles & Responsibilities - What will you do in the role? With strong experience in managing technical teams and Machine Learning development lifecycles, you will be responsible for the day-to-day management of the Machine Learning Engineering team, ensuring they deliver high-quality solutions. You will also: Coach and develop the MLE team to leverage cutting-edge Data Science, Machine Learning & AI technology. Maintain, evolve and develop our platforms to ensure that we have robust, scalable environments for the Data Scientists. Provide technical guidance, support and mentorship to team members, helping them grow in their roles. Stay current with industry trends and advancements in AI and ML to support the company's data strategy. Establish and enforce best practice for ML & AI model deployment and monitoring.
What are the key skills / experience you'll already have? You will be highly numerate with a strong technical background and a proven ability to maintain hands-on technical contribution whilst managing a team. You will have: Experience of training, evaluating, deploying and maintaining Machine Learning models. Sound understanding of data warehousing and ETL tools. Strong technical skills in the following key tools & technologies: Python and PySpark for data processing; familiarity with Snowflake, RDBMS or other databases; experience of working with Cloud infrastructure; experience of building infrastructure as code using technologies such as Terraform; exposure to ML frameworks like scikit-learn/TensorFlow/PyTorch. Strong drive to master new tools, platforms, and technologies. Methodical approach with good attention to detail. Effective communication skills - ability to work with international teams and across cultures.
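To make the "training, evaluating, deploying and maintaining" lifecycle above concrete, here is a minimal sketch using scikit-learn on synthetic data; every name in it (model choice, metric, file name) is illustrative rather than anything specified in the posting:

```python
# Minimal train/evaluate/persist sketch (illustrative only; not the employer's stack).
import joblib
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the holdout set before promoting the artifact.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")

joblib.dump(model, "model.joblib")  # artifact handed to a deployment/monitoring pipeline
```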
Posted 1 week ago
5.0 - 10.0 years
5 - 10 Lacs
Bengaluru, Karnataka, India
On-site
Greetings from Future Focus Infotech!!! We have multiple opportunities: Data Engineer (F2F interview on 17th May, Saturday). Exp: 5+ yrs. Location: Hyderabad. Job Type: This is a permanent position with Future Focus Infotech Pvt Ltd, and you will be deputed with our client. A small glimpse of Future Focus Infotech Pvt Ltd (company URL: www.focusinfotech.com). If you are interested in the above opportunity, send your updated CV and the below information to [HIDDEN TEXT]
Posted 1 week ago
3.0 - 8.0 years
3 - 7 Lacs
Hyderabad, Telangana, India
On-site
Tech Stalwart Solution Private Limited is looking for a Sr. Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities: Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 week ago
3.0 - 8.0 years
6 - 7 Lacs
Chennai, Tamil Nadu, India
On-site
Tech Stalwart Solution Private Limited is looking for a Data Engineering professional to join our dynamic team and embark on a rewarding career journey. Responsibilities: Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 week ago
4.0 - 9.0 years
4 - 9 Lacs
Hyderabad, Telangana, India
On-site
Develop, configure, and optimize Apache DolphinScheduler workflows for data processing and automation. Design and implement ETL pipelines, job scheduling, and workflow orchestration solutions. Troubleshoot and resolve performance, scalability, and reliability issues in DolphinScheduler. Integrate DolphinScheduler with big data tools such as Hadoop, Spark, Flink, Hive, and Kafka. Work closely with data engineers, DevOps, and software teams to enhance workflow automation. Develop custom plugins and extensions for DolphinScheduler as needed. Monitor and optimize job execution, resource allocation, and workflow efficiency. Maintain best practices for CI/CD, version control, and infrastructure automation. Required Skills & Qualifications: Strong experience with Apache DolphinScheduler in a production environment. Proficiency in Java, Python, or Scala for workflow scripting and automation. Experience with big data technologies like Hadoop, Spark, Hive, Flink, and Kafka. Understanding of workflow orchestration, DAG execution, and scheduling strategies. Strong problem-solving skills and ability to work in a fast-paced environment. Role: Data Engineer. Industry Type: IT Services & Consulting. Department: Data Science & Analytics. Employment Type: Full Time, Permanent. Role Category: Data Science & Machine Learning. Education: UG: Any Graduate; PG: Any Postgraduate.
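The "DAG execution and scheduling strategies" this posting asks about can be illustrated with a small generic sketch; this deliberately avoids DolphinScheduler's own API and only shows the core idea any such scheduler implements: run each task only after its upstream dependencies have finished.

```python
# Generic DAG-execution sketch (illustrative; not DolphinScheduler's actual API).
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def extract(): print("extract: pull raw data")
def transform(): print("transform: clean and aggregate")
def load(): print("load: write to warehouse")

# Each task maps to the set of upstream tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}
tasks = {"extract": extract, "transform": transform, "load": load}

# static_order() yields tasks with all dependencies satisfied first.
for name in TopologicalSorter(dag).static_order():
    tasks[name]()  # a real scheduler adds retries, timeouts, and parallelism
```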
Posted 1 week ago
4.0 - 9.0 years
4 - 9 Lacs
Delhi, India
On-site
Develop, configure, and optimize Apache DolphinScheduler workflows for data processing and automation. Design and implement ETL pipelines, job scheduling, and workflow orchestration solutions. Troubleshoot and resolve performance, scalability, and reliability issues in DolphinScheduler. Integrate DolphinScheduler with big data tools such as Hadoop, Spark, Flink, Hive, and Kafka. Work closely with data engineers, DevOps, and software teams to enhance workflow automation. Develop custom plugins and extensions for DolphinScheduler as needed. Monitor and optimize job execution, resource allocation, and workflow efficiency. Maintain best practices for CI/CD, version control, and infrastructure automation. Required Skills & Qualifications: Strong experience with Apache DolphinScheduler in a production environment. Proficiency in Java, Python, or Scala for workflow scripting and automation. Experience with big data technologies like Hadoop, Spark, Hive, Flink, and Kafka. Understanding of workflow orchestration, DAG execution, and scheduling strategies. Strong problem-solving skills and ability to work in a fast-paced environment. Role: Data Engineer. Industry Type: IT Services & Consulting. Department: Data Science & Analytics. Employment Type: Full Time, Permanent. Role Category: Data Science & Machine Learning. Education: UG: Any Graduate; PG: Any Postgraduate.
Posted 1 week ago
4.0 - 9.0 years
4 - 9 Lacs
Pune, Maharashtra, India
On-site
Develop, configure, and optimize Apache DolphinScheduler workflows for data processing and automation. Design and implement ETL pipelines, job scheduling, and workflow orchestration solutions. Troubleshoot and resolve performance, scalability, and reliability issues in DolphinScheduler. Integrate DolphinScheduler with big data tools such as Hadoop, Spark, Flink, Hive, and Kafka. Work closely with data engineers, DevOps, and software teams to enhance workflow automation. Develop custom plugins and extensions for DolphinScheduler as needed. Monitor and optimize job execution, resource allocation, and workflow efficiency. Maintain best practices for CI/CD, version control, and infrastructure automation. Required Skills & Qualifications: Strong experience with Apache DolphinScheduler in a production environment. Proficiency in Java, Python, or Scala for workflow scripting and automation. Experience with big data technologies like Hadoop, Spark, Hive, Flink, and Kafka. Understanding of workflow orchestration, DAG execution, and scheduling strategies. Strong problem-solving skills and ability to work in a fast-paced environment. Role: Data Engineer. Industry Type: IT Services & Consulting. Department: Data Science & Analytics. Employment Type: Full Time, Permanent. Role Category: Data Science & Machine Learning. Education: UG: Any Graduate; PG: Any Postgraduate.
Posted 1 week ago
4.0 - 8.0 years
4 - 8 Lacs
Hyderabad, Telangana, India
On-site
Develop, test, and deploy data processing applications using Apache Spark and Scala. Optimize and tune Spark applications for better performance on large-scale data sets. Work with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Kafka) to build data pipelines and storage solutions. Collaborate with data scientists, business analysts, and other developers to understand data requirements and deliver solutions. Design and implement high-performance data processing and analytics solutions. Ensure data integrity, accuracy, and security across all processing tasks. Troubleshoot and resolve performance issues in Spark, Cloudera, and related technologies. Implement version control and CI/CD pipelines for Spark applications. Required Skills & Experience: Minimum 8 years of experience in application development. Strong hands-on experience in Apache Spark, Scala, and Spark SQL for distributed data processing. Hands-on experience with Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop. Familiarity with other Big Data technologies, including Apache Kafka, Flume, Oozie, and NiFi. Experience building and optimizing ETL pipelines using Spark and working with structured and unstructured data. Experience with SQL and NoSQL databases such as HBase, Hive, and PostgreSQL. Knowledge of data warehousing concepts, dimensional modeling, and data lakes. Ability to troubleshoot and optimize Spark and Cloudera platform performance. Familiarity with version control tools like Git and CI/CD tools (e.g., Jenkins, GitLab). Role: Software Development - Other. Industry Type: IT Services & Consulting. Department: Engineering - Software & QA. Employment Type: Full Time, Permanent. Role Category: Software Development. Education: UG: Any Graduate; PG: Any Postgraduate.
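As a rough illustration of the Spark tuning this role calls for (the posting targets Scala, but the same ideas read identically in PySpark), here is a hedged sketch with invented paths and column names:

```python
# PySpark partitioning/caching sketch (paths and columns are illustrative).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("tuning-sketch")
         .config("spark.sql.shuffle.partitions", "200")  # tune to data volume
         .getOrCreate())

events = spark.read.parquet("hdfs:///data/events")  # hypothetical input path

# Repartition on the aggregation key to reduce shuffle skew, then cache
# because the DataFrame is reused by several downstream aggregations.
events = events.repartition(200, "customer_id").cache()

daily = events.groupBy("customer_id", F.to_date("ts").alias("day")).count()
totals = events.groupBy("customer_id").agg(F.sum("amount").alias("total"))

daily.write.mode("overwrite").parquet("hdfs:///out/daily")
totals.write.mode("overwrite").parquet("hdfs:///out/totals")
```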
Posted 1 week ago
4.0 - 8.0 years
4 - 8 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Machine Learning Engineer to design and develop robust analytics models using statistical and machine learning algorithms. In this role, you will work closely with product and engineering teams to solve complex business problems, identify data-driven opportunities, and create personalized experiences for customers. You will be responsible for building end-to-end machine learning solutions, implementing models in production, and working with various data frameworks and tools such as Python, Spark, and Databricks. Key Responsibilities: Analytics Model Development: Analyze use cases and design appropriate analytics models using statistical and machine learning algorithms tailored to specific business requirements. Develop machine learning algorithms to drive personalized customer experiences and provide actionable business insights. Apply expertise in data mining and machine learning techniques, including forecasting, prediction, segmentation, recommendation, and fraud detection. Data Engineering and Preparation: Extend and augment company data with third-party data to enrich analytics capabilities. Enhance data collection procedures to include necessary information for building analytics systems. Prepare raw data for analysis, including cleaning, imputing missing values, and standardizing data formats using Python data frameworks (e.g., Pandas, NumPy). Machine Learning Model Implementation: Implement machine learning models, considering both performance and scalability, using tools like PySpark in Databricks. Design and build infrastructure to facilitate large-scale data analytics and experimentation. Work with tools like Jupyter Notebooks for data exploration and model development. What We're Looking For: Educational Background: Undergraduate or graduate degree in Computer Science, Mathematics, Physics, or related fields; a PhD is preferred but not necessary. Experience: At least 5 years of experience in data analytics, with a strong understanding of core statistical algorithms such as classification and regression analysis. High-level knowledge of analytics use cases such as language analysis, assortment optimization, promotional planning, dynamic pricing, markdown optimization, labor scheduling, and optimization. Technical Skills: Strong experience with Python-based machine learning libraries (e.g., scikit-learn, TensorFlow, PyTorch). Proficiency in using analytics platforms like Databricks for large-scale data processing. At least 4 years of continuous experience with Spark, particularly PySpark implementation. Hands-on experience with data processing and analysis tools such as Pandas, NumPy, and Jupyter Notebooks. Role: Data Science & Machine Learning - Other. Industry Type: IT Services & Consulting. Department: Data Science & Analytics. Employment Type: Full Time, Permanent. Role Category: Data Science & Machine Learning. Education: UG: Any Graduate; PG: Any Postgraduate.
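A small hedged illustration of the data-preparation step named above (cleaning, imputing missing values, standardizing formats) with pandas/NumPy; the data and column names are made up:

```python
# Data-cleaning sketch with pandas/NumPy (made-up data and column names).
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "price": [10.0, np.nan, 12.5, 11.0],
    "units": [3, 5, None, 2],
    "date": ["2024-01-01", "01/02/2024", "2024-01-03", "2024-01-04"],
})

clean = raw.copy()
clean["price"] = clean["price"].fillna(clean["price"].median())  # impute numeric gaps
clean["units"] = clean["units"].fillna(0).astype(int)
# format="mixed" (pandas >= 2.0) standardizes heterogeneous date strings.
clean["date"] = pd.to_datetime(clean["date"], format="mixed")

print(clean.dtypes)
print(clean)
```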
Posted 1 week ago
5.0 - 10.0 years
5 - 10 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a skilled PySpark Data Engineer to join our team and drive the development of robust data processing and transformation solutions within our data platform. You will be responsible for designing, implementing, and maintaining PySpark-based applications to handle complex data processing tasks, ensure data quality, and integrate with diverse data sources. The ideal candidate possesses strong PySpark development skills, experience with big data technologies, and the ability to work in a fast-paced, data-driven environment. Key Responsibilities: Data Engineering Development: Design, develop, and test PySpark-based applications to process, transform, and analyze large-scale datasets from various sources, including relational databases, NoSQL databases, batch files, and real-time data streams. Implement efficient data transformation and aggregation using PySpark and relevant big data frameworks. Develop robust error handling and exception management mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including partitioning, caching, and tuning of Spark configurations. Data Analysis and Transformation: Collaborate with data analysts, data scientists, and data architects to understand data processing requirements and deliver high-quality data solutions. Analyze and interpret data structures, formats, and relationships to implement effective data transformations using PySpark. Work with distributed datasets in Spark, ensuring optimal performance for large-scale data processing and analytics. Data Integration and ETL: Design and implement ETL (Extract, Transform, Load) processes to ingest and integrate data from various sources, ensuring consistency, accuracy, and performance. Integrate PySpark applications with data sources such as SQL databases, NoSQL databases, data lakes, and streaming platforms. Qualifications and Skills: Bachelor's degree in Computer Science, Information Technology, or a related field. 5+ years of hands-on experience in big data development, preferably with exposure to data-intensive applications. Strong understanding of data processing principles, techniques, and best practices in a big data environment. Proficiency in PySpark, Apache Spark, and related big data technologies for data processing, analysis, and integration. Experience with ETL development and data pipeline orchestration tools (e.g., Apache Airflow, Luigi). Strong analytical and problem-solving skills, with the ability to translate business requirements into technical solutions. Excellent communication and collaboration skills to work effectively with data analysts, data architects, and other team members. Role: Data Science & Machine Learning - Other. Industry Type: IT Services & Consulting. Department: Data Science & Analytics. Employment Type: Full Time, Permanent. Role Category: Data Science & Machine Learning. Education: UG: Any Graduate; PG: Any Postgraduate.
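A compact sketch of a PySpark ETL job in the spirit of this description, with extract, validate, transform, load, and basic error handling; the S3 paths and columns are hypothetical:

```python
# PySpark ETL sketch with basic validation and error handling (hypothetical paths).
import sys
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

try:
    orders = spark.read.json("s3a://raw/orders/")           # extract
    valid = (orders
             .filter(F.col("order_id").isNotNull())          # quality gate
             .withColumn("amount", F.col("amount").cast("double"))
             .dropDuplicates(["order_id"]))

    dropped = orders.count() - valid.count()
    if dropped > 0:
        print(f"warning: dropped {dropped} invalid rows", file=sys.stderr)

    valid.write.mode("append").partitionBy("order_date").parquet("s3a://curated/orders/")
except Exception as exc:                                     # fail loudly so the scheduler retries
    print(f"ETL failed: {exc}", file=sys.stderr)
    raise
```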
Posted 1 week ago
5.0 - 10.0 years
5 - 10 Lacs
Delhi, India
On-site
We are seeking a skilled PySpark Data Engineer to join our team and drive the development of robust data processing and transformation solutions within our data platform. You will be responsible for designing, implementing, and maintaining PySpark-based applications to handle complex data processing tasks, ensure data quality, and integrate with diverse data sources. The ideal candidate possesses strong PySpark development skills, experience with big data technologies, and the ability to work in a fast-paced, data-driven environment. Key Responsibilities: Data Engineering Development: Design, develop, and test PySpark-based applications to process, transform, and analyze large-scale datasets from various sources, including relational databases, NoSQL databases, batch files, and real-time data streams. Implement efficient data transformation and aggregation using PySpark and relevant big data frameworks. Develop robust error handling and exception management mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including partitioning, caching, and tuning of Spark configurations. Data Analysis and Transformation: Collaborate with data analysts, data scientists, and data architects to understand data processing requirements and deliver high-quality data solutions. Analyze and interpret data structures, formats, and relationships to implement effective data transformations using PySpark. Work with distributed datasets in Spark, ensuring optimal performance for large-scale data processing and analytics. Data Integration and ETL: Design and implement ETL (Extract, Transform, Load) processes to ingest and integrate data from various sources, ensuring consistency, accuracy, and performance. Integrate PySpark applications with data sources such as SQL databases, NoSQL databases, data lakes, and streaming platforms. Qualifications and Skills: Bachelor's degree in Computer Science, Information Technology, or a related field. 5+ years of hands-on experience in big data development, preferably with exposure to data-intensive applications. Strong understanding of data processing principles, techniques, and best practices in a big data environment. Proficiency in PySpark, Apache Spark, and related big data technologies for data processing, analysis, and integration. Experience with ETL development and data pipeline orchestration tools (e.g., Apache Airflow, Luigi). Strong analytical and problem-solving skills, with the ability to translate business requirements into technical solutions. Excellent communication and collaboration skills to work effectively with data analysts, data architects, and other team members. Role: Data Science & Machine Learning - Other. Industry Type: IT Services & Consulting. Department: Data Science & Analytics. Employment Type: Full Time, Permanent. Role Category: Data Science & Machine Learning. Education: UG: Any Graduate; PG: Any Postgraduate.
Posted 1 week ago
4.0 - 10.0 years
4 - 10 Lacs
Pune, Maharashtra, India
On-site
Infrastructure as Code (IaC): Design, implement, and manage infrastructure as code using Terraform for GCP environments. Ensure infrastructure configurations are scalable, reliable, and follow best practices. GCP Platform Management: Architect and manage GCP environments, including compute, storage, and networking components. Collaborate with cross-functional teams to understand requirements and provide scalable infrastructure solutions. Vertex AI Integration: Work closely with data scientists and AI specialists to integrate and optimize solutions using Vertex AI on GCP. Implement and manage machine learning pipelines and models within the Vertex AI environment. BigQuery Storage: Design and optimize data storage solutions using BigQuery Storage. Collaborate with data engineers and analysts to ensure efficient data processing and analysis. Wiz Security Control Integration: Integrate and configure Wiz Security Control for continuous security monitoring and compliance checks within GCP environments. Collaborate with security teams to implement and enhance security controls. Automation and Tooling: Implement automation and tooling solutions for monitoring, scaling, and managing GCP resources. Develop and maintain scripts and tools to streamline operational tasks. Security and Compliance: Implement security best practices in GCP environments, including identity and access management, encryption, and compliance controls. Must understand Policy as Code in GCP. Perform regular security assessments and audits. Requirements: Bachelor's Degree: Bachelor's degree in Computer Science, Information Technology, or a related field (must be from a top-tier school). GCP Certification: GCP Professional Cloud Architect or similar certifications are highly desirable. Infrastructure as Code: Proven experience with Infrastructure as Code (IaC) using Terraform for GCP environments. Vertex AI and BigQuery: Hands-on experience with Vertex AI for generative AI solutions and BigQuery for data storage and analytics. Wiz Security Control: Experience with Wiz Security Control and its integration for continuous security monitoring in GCP environments. GCP Services: In-depth knowledge of various GCP services, including Compute Engine, Cloud Storage, VPC, and IAM. Automation Tools: Proficiency in scripting languages (e.g., Python, Bash) and automation tools for GCP resource management. Security and Compliance: Strong understanding of GCP security best practices and compliance standards. Collaboration Skills: Excellent collaboration and communication skills, with the ability to work effectively in a team-oriented environment. Role: Solution Architect. Industry Type: IT Services & Consulting. Department: Engineering - Software & QA. Employment Type: Full Time, Permanent. Role Category: Software Development. Education: UG: Any Graduate; PG: Any Postgraduate.
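Terraform itself is written in HCL, but the posting also asks for Python scripting for GCP resource management; here is a minimal hedged example with the google-cloud-storage client, where the project name and the "owner" label convention are invented:

```python
# GCP resource-inventory sketch using the google-cloud-storage client
# (project name and label convention are placeholders; credentials come
# from the environment, e.g. GOOGLE_APPLICATION_CREDENTIALS).
from google.cloud import storage

client = storage.Client(project="my-gcp-project")

# List buckets and flag any without an "owner" label: a toy governance check.
for bucket in client.list_buckets():
    labels = bucket.labels or {}
    if "owner" not in labels:
        print(f"bucket {bucket.name} has no owner label")
```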
Posted 1 week ago
4.0 - 10.0 years
4 - 10 Lacs
Delhi, India
On-site
Infrastructure as Code (IaC): Design, implement, and manage infrastructure as code using Terraform for GCP environments. Ensure infrastructure configurations are scalable, reliable, and follow best practices. GCP Platform Management: Architect and manage GCP environments, including compute, storage, and networking components. Collaborate with cross-functional teams to understand requirements and provide scalable infrastructure solutions. Vertex AI Integration: Work closely with data scientists and AI specialists to integrate and optimize solutions using Vertex AI on GCP. Implement and manage machine learning pipelines and models within the Vertex AI environment. BigQuery Storage: Design and optimize data storage solutions using BigQuery Storage. Collaborate with data engineers and analysts to ensure efficient data processing and analysis. Wiz Security Control Integration: Integrate and configure Wiz Security Control for continuous security monitoring and compliance checks within GCP environments. Collaborate with security teams to implement and enhance security controls. Automation and Tooling: Implement automation and tooling solutions for monitoring, scaling, and managing GCP resources. Develop and maintain scripts and tools to streamline operational tasks. Security and Compliance: Implement security best practices in GCP environments, including identity and access management, encryption, and compliance controls. Must understand Policy as Code in GCP. Perform regular security assessments and audits. Requirements: Bachelor's Degree: Bachelor's degree in Computer Science, Information Technology, or a related field (must be from a top-tier school). GCP Certification: GCP Professional Cloud Architect or similar certifications are highly desirable. Infrastructure as Code: Proven experience with Infrastructure as Code (IaC) using Terraform for GCP environments. Vertex AI and BigQuery: Hands-on experience with Vertex AI for generative AI solutions and BigQuery for data storage and analytics. Wiz Security Control: Experience with Wiz Security Control and its integration for continuous security monitoring in GCP environments. GCP Services: In-depth knowledge of various GCP services, including Compute Engine, Cloud Storage, VPC, and IAM. Automation Tools: Proficiency in scripting languages (e.g., Python, Bash) and automation tools for GCP resource management. Security and Compliance: Strong understanding of GCP security best practices and compliance standards. Collaboration Skills: Excellent collaboration and communication skills, with the ability to work effectively in a team-oriented environment. Role: Solution Architect. Industry Type: IT Services & Consulting. Department: Engineering - Software & QA. Employment Type: Full Time, Permanent. Role Category: Software Development. Education: UG: Any Graduate; PG: Any Postgraduate.
Posted 1 week ago
6.0 - 9.0 years
6 - 9 Lacs
Delhi, India
On-site
6+ years in software engineering, with strong fundamentals in distributed systems design and development. Experience creating large-scale data processing pipelines for streaming and computing with data technologies such as any of the following: AWS, Snowflake (or any relational database), Kafka. Good working knowledge of containers (Kubernetes/OpenShift/EKS or similar) and experience building cloud-native applications that run on containers. Proficiency in at least one of the following programming languages: Java, Scala, Python, Go. Enthusiasm and ability to pick up new languages and concepts as necessary. Ability to communicate thoughtfully, demonstrating problem-solving skills and a learning mentality to build long-term relationships. Experience writing well-structured, quality code that's easily maintainable by others. Role: Data Platform Engineer. Industry Type: IT Services & Consulting. Department: Engineering - Software & QA. Employment Type: Full Time, Permanent. Role Category: Software Development. Education: UG: Any Graduate; PG: Any Postgraduate.
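As a hedged illustration of the streaming side of this stack, a minimal Kafka consumer using the kafka-python client; the topic, broker address, and group id are placeholders:

```python
# Minimal Kafka consumer sketch with kafka-python (topic/brokers are placeholders).
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                                    # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="demo-pipeline",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:                          # blocks, polling the broker
    event = message.value
    print(f"partition={message.partition} offset={message.offset} event={event}")
```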
Posted 1 week ago
6.0 - 9.0 years
6 - 9 Lacs
Kolkata, West Bengal, India
On-site
6+ years in software engineering, with strong fundamentals in distributed systems design and development. Experience creating large-scale data processing pipelines for streaming and computing with data technologies such as any of the following: AWS, Snowflake (or any relational database), Kafka. Good working knowledge of containers (Kubernetes/OpenShift/EKS or similar) and experience building cloud-native applications that run on containers. Proficiency in at least one of the following programming languages: Java, Scala, Python, Go. Enthusiasm and ability to pick up new languages and concepts as necessary. Ability to communicate thoughtfully, demonstrating problem-solving skills and a learning mentality to build long-term relationships. Experience writing well-structured, quality code that's easily maintainable by others. Role: Data Platform Engineer. Industry Type: IT Services & Consulting. Department: Engineering - Software & QA. Employment Type: Full Time, Permanent. Role Category: Software Development. Education: UG: Any Graduate; PG: Any Postgraduate.
Posted 1 week ago
3.0 - 7.0 years
3 - 7 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a Data Engineer to help build and integrate a Generative AI-powered conversational assistant into our website and mobile app. This role is crucial in handling data pipelines, model training, and infrastructure setup to deliver a seamless, privacy-compliant experience for users seeking personalized health insights. The Data Engineer will work closely with our AI and software development teams to design scalable data solutions within Google Cloud Platform (GCP) to support this next-generation AI service. Key Responsibilities: Data Integration & Pipeline Development: Design and implement data pipelines to support training and finetuning on knowledge-base and user data, ensuring data quality, scalability, and efficiency. Data Processing & Transformation: Develop data transformation processes to prepare data for Natural Language Processing (NLP) models, facilitating personalized and accurate health recommendations. Privacy & Security Compliance: Ensure all data handling practices comply with privacy and security standards, focusing on user data protection within AI model training and deployment. Infrastructure Setup & Management: Build and maintain foundational cloud infrastructure on GCP to host, deploy, and scale securely and efficiently across platforms. Collaboration with AI & DevOps Teams: Partner with AI/ML and DevOps teams to finetune, test, and optimize NLP models for production, focusing on deployment performance and user experience. Website & Mobile Integration Support: Work alongside frontend developers to ensure smooth data flow and integration between the backend, website, and mobile app. Monitoring & Optimization: Implement monitoring, logging, and automated alerts to ensure data pipelines, model interactions, and infrastructure meet performance and reliability requirements. Qualifications: Education: Bachelor's or Master's in Computer Science, Data Engineering, or a related field. Experience: 3+ years in data engineering, preferably within Generative AI or NLP-focused projects. Hands-on experience with Google Cloud Platform (GCP), including BigQuery, Dataflow, and Cloud Storage. Proven ability in data pipeline design and data transformations for AI model training. Skills: Strong programming skills in Python and familiarity with SQL. Experience with DevOps tools (e.g., Kubernetes, Docker) and CI/CD pipelines in GCP. Proficient in data management practices, data privacy, and security protocols. Familiarity with AI/ML workflows, specifically NLP model training and finetuning. Nice to Have: Experience working with Contentful or React Native integrations. Knowledge of MLOps practices to support continuous model training and deployment. Role: Data Engineer. Industry Type: IT Services & Consulting. Department: Engineering - Software & QA. Employment Type: Full Time, Permanent. Role Category: Software Development.
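For the GCP data-pipeline side of the role, a minimal hedged BigQuery example that submits a query and iterates the results; the project, dataset, and table names are invented:

```python
# BigQuery query sketch (project/dataset/table names are invented;
# authentication comes from the environment).
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT user_id, COUNT(*) AS sessions
    FROM `my-project.analytics.events`
    GROUP BY user_id
    ORDER BY sessions DESC
    LIMIT 10
"""
for row in client.query(query).result():   # result() waits for the job to finish
    print(row.user_id, row.sessions)
```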
Posted 1 week ago
3.0 - 7.0 years
3 - 7 Lacs
Delhi, India
On-site
We are seeking a Data Engineer to help build and integrate a Generative AI-powered conversational assistant into our website and mobile app. This role is crucial in handling data pipelines, model training, and infrastructure setup to deliver a seamless, privacy-compliant experience for users seeking personalized health insights. The Data Engineer will work closely with our AI and software development teams to design scalable data solutions within Google Cloud Platform (GCP) to support this next-generation AI service. Key Responsibilities: Data Integration & Pipeline Development: Design and implement data pipelines to support training and finetuning on knowledge-base and user data, ensuring data quality, scalability, and efficiency. Data Processing & Transformation: Develop data transformation processes to prepare data for Natural Language Processing (NLP) models, facilitating personalized and accurate health recommendations. Privacy & Security Compliance: Ensure all data handling practices comply with privacy and security standards, focusing on user data protection within AI model training and deployment. Infrastructure Setup & Management: Build and maintain foundational cloud infrastructure on GCP to host, deploy, and scale securely and efficiently across platforms. Collaboration with AI & DevOps Teams: Partner with AI/ML and DevOps teams to finetune, test, and optimize NLP models for production, focusing on deployment performance and user experience. Website & Mobile Integration Support: Work alongside frontend developers to ensure smooth data flow and integration between the backend, website, and mobile app. Monitoring & Optimization: Implement monitoring, logging, and automated alerts to ensure data pipelines, model interactions, and infrastructure meet performance and reliability requirements. Qualifications: Education: Bachelor's or Master's in Computer Science, Data Engineering, or a related field. Experience: 3+ years in data engineering, preferably within Generative AI or NLP-focused projects. Hands-on experience with Google Cloud Platform (GCP), including BigQuery, Dataflow, and Cloud Storage. Proven ability in data pipeline design and data transformations for AI model training. Skills: Strong programming skills in Python and familiarity with SQL. Experience with DevOps tools (e.g., Kubernetes, Docker) and CI/CD pipelines in GCP. Proficient in data management practices, data privacy, and security protocols. Familiarity with AI/ML workflows, specifically NLP model training and finetuning. Nice to Have: Experience working with Contentful or React Native integrations. Knowledge of MLOps practices to support continuous model training and deployment. Role: Data Engineer. Industry Type: IT Services & Consulting. Department: Engineering - Software & QA. Employment Type: Full Time, Permanent. Role Category: Software Development.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
As a Database Management Specialist at My Digital Shelf, you will play a crucial role in overseeing data handling, managing databases, and supporting data-driven initiatives for our conferences and the company. Your attention to detail, proactive approach, and experience in database management will be instrumental in ensuring the functionality and reliability of our data systems. Your responsibilities will include setting up and testing new database systems, monitoring performance and efficiency, and implementing improvements as necessary. You will be responsible for designing and preparing comprehensive reports for management, developing protocols for effective data processing, and creating complex query definitions for efficient data extraction and analysis. In addition, you will play a key role in training colleagues on data input and extraction processes, executing email marketing campaigns, and coordinating social media outreach initiatives. Your ability to multitask, prioritize, and work under pressure will be essential in executing these tasks efficiently and effectively. Furthermore, you will be involved in coordinating logistics for events, maintaining event databases with meticulous attention to detail, and providing ad hoc support to company Directors as needed. Your excellent IT skills, organizational abilities, and strong interpersonal communication skills will be crucial in successfully fulfilling these responsibilities. To qualify for this role, you should have graduated to degree level or equivalent in a relevant discipline, ideally Marketing, Social Sciences, Humanities, Languages, or similar. Fluency in written and spoken English, the ability to work well under pressure, and excellent IT skills are essential requirements for this position. At My Digital Shelf, we are committed to fostering a diverse and inclusive workforce where every individual can thrive. If you are ready to take on a multifaceted role that impacts the core of our business development and contribute to our growth and success, we encourage you to apply for this exciting opportunity.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Computer Vision Researcher, you will be responsible for designing and optimizing deep learning models, building prototypes, and collaborating closely with cross-functional teams to ensure the seamless integration and real-time deployment of computer vision technologies into robotic systems. Joining our ambitious Advanced Robotics team, dedicated to pushing the boundaries of robotics and launching innovative products, you will be based out of Noida Headquarters. In this role, you will play a crucial part in developing a robot vision system that enables humanoid robots to navigate, interact, and learn from complex and dynamic environments. Your responsibilities will include conducting advanced research in computer vision, designing and training deep learning models, implementing efficient data processing pipelines, building prototypes and proof-of-concept systems, evaluating and optimizing the performance of computer vision models, integrating third-party computer vision libraries, and collaborating with product teams, hardware engineers, and software developers. You will also mentor junior researchers and engineers on computer vision concepts and technologies, providing guidance in experimental design, data analysis, and algorithm development. To qualify for this role, you should have a PhD or master's degree in Computer Science, Computer Vision, AI, Robotics, or a related field, along with 5-8 years of experience. Essential requirements include a strong understanding of computer vision techniques; proficiency in deep learning frameworks; experience with Visual SLAM, LiDAR SLAM, VIO, CNNs, RNNs, GANs, Visual Language Action (VLA), and traditional computer vision techniques; programming skills in C++ and Python; experience with data processing, augmentation, computer vision libraries, and GPU programming; knowledge of machine learning algorithms and model evaluation metrics; and staying current with the latest research trends.
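As a toy illustration of the "design and train deep learning models" responsibility, a tiny PyTorch CNN and one training step on random tensors; the architecture and shapes are arbitrary, not anything from this posting:

```python
# Tiny PyTorch CNN and a single training step on random data (shapes arbitrary).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # -> (N, 32, 1, 1)
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)          # fake batch
labels = torch.randint(0, 10, (8,))

loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```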
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
This role is for a GCP Data Engineer who can build cloud analytics platforms to meet expanding business requirements with speed and quality using lean Agile practices. You will work on analysing and manipulating large datasets supporting the enterprise by activating data assets to support Enabling Platforms and Analytics in GCP. You will be responsible for designing the transformation and modernization on GCP. Experience with large-scale solutions and operationalizing of data warehouses, data lakes and analytics platforms on Google Cloud Platform or another cloud environment is a must. We are looking for candidates who have a broad set of technology skills across these areas and who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deploying on the Google Cloud Platform. Responsibilities: Develop technical solutions for Data Engineering and work between 1 PM and 10 PM IST to enable more overlap time with European and North American counterparts. This role will work closely with teams in the US as well as Europe to ensure robust, integrated migration aligned with Global Data Engineering patterns and standards. Design and deploy data pipelines with automated data lineage. Develop reusable Data Engineering patterns. Design and build production data engineering solutions to deliver pipeline patterns using Google Cloud Platform (GCP) services: BigQuery, Dataflow, Pub/Sub, BigTable, Data Fusion, DataProc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine. Ensure timely migration of the Ford Credit Europe (FCE) Teradata warehouse to GCP and enable Teradata platform decommissioning by end of 2025, with a strong focus on ensuring continued, robust, and accurate regulatory reporting capability. Position Opportunities: The Data Engineer role within FC Data Engineering supports the following opportunities for successful individuals: Key player in a high-priority program to unlock the potential of Data Engineering products and services and secure operational resilience for Ford Credit Europe. Explore and implement leading-edge technologies, tooling and software development best practices. Experience of managing data warehousing and product delivery within a financially regulated environment. Experience of collaborative development practices within an open-plan, team-designed environment. Experience of working with third-party suppliers / supplier management. Continued personal and professional development with support and encouragement for further certification. Qualifications - Essential: 5+ years of experience in data engineering, with a focus on data warehousing and ETL development (including data modelling, ETL processes, and data warehousing principles). 5+ years of SQL development experience. 3+ years of cloud experience (GCP preferred) with solutions designed and implemented at production scale. Strong understanding of key GCP services, especially those related to data processing (batch/real time), leveraging Terraform, BigQuery, Dataflow, DataFusion, Dataproc, Cloud Build, Airflow, and Pub/Sub, alongside storage including Cloud Storage, Bigtable, Cloud Spanner. Excellent problem-solving skills, with the ability to design and optimize complex data pipelines. Strong communication and collaboration skills, capable of working effectively with both technical and non-technical stakeholders as part of a large global and diverse team.
Experience developing with microservice architectures on container orchestration frameworks. Designing pipelines and architectures for data processing. Strong evidence of self-motivation to continuously develop own engineering skills and those of the team. Proven record of working autonomously in areas of high ambiguity, without day-to-day supervisory support. Evidence of a proactive mindset to problem solving and willingness to take the initiative. Strong prioritization, coordination, organizational and communication skills, and a proven ability to balance workload and competing demands to meet deadlines. Desired: Professional Certification in GCP (e.g., Professional Data Engineer). Data engineering or development experience gained in a regulated, financial environment. Experience with Teradata to GCP migrations is a plus. Strong expertise in SQL and experience with programming languages such as Python, Java, and/or Apache Beam. Experience of coaching and mentoring Data Engineers. Experience with data security, governance, and compliance best practices in the cloud. An understanding of current architecture standards and digital platform services strategy.
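Since the role centres on Dataflow, here is a minimal hedged Apache Beam pipeline (Dataflow executes Beam pipelines); this one runs locally on the DirectRunner, and the data is invented:

```python
# Minimal Apache Beam pipeline sketch (runs locally with the DirectRunner;
# on GCP the same code targets Dataflow via pipeline options).
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | "Create" >> beam.Create(["alpha,1", "beta,2", "alpha,3"])   # toy input
     | "Parse" >> beam.Map(lambda line: line.split(","))
     | "KV" >> beam.Map(lambda parts: (parts[0], int(parts[1])))
     | "Sum" >> beam.CombinePerKey(sum)                             # per-key aggregation
     | "Print" >> beam.Map(print))
```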
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
The job involves taking on the role of a Gen AI Developer + Architect with specific responsibilities related to Azure AI services. Your main focus will be on developing and architecting solutions using Azure OpenAI Service and other AI technologies. You should have a deep understanding of Azure AI services, especially Azure OpenAI Service, and be proficient in large language models (LLMs) and their architectures. Additionally, you should possess expertise in Azure Machine Learning and Azure Cognitive Services, along with a strong knowledge of Azure cloud architecture and best practices. In this role, you will be expected to work on prompt engineering and fine-tuning LLMs, while also ensuring compliance with AI ethics and responsible AI principles. Familiarity with Azure security and compliance standards is essential, as well as experience in developing chatbots, data integration, prompting, and AI source interaction. Your technical skills should include strong programming abilities, particularly in Python, and expertise in Azure AI services, including Azure OpenAI Service. Proficiency in machine learning frameworks such as PyTorch and TensorFlow is necessary, along with experience in Azure DevOps and CI/CD pipelines. Knowledge of distributed computing and scalable AI systems on Azure, as well as familiarity with Azure Kubernetes Service (AKS) for AI workloads, will be beneficial. An understanding of data processing and ETL pipelines in Azure is also required to excel in this role.
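A minimal hedged call to Azure OpenAI Service with the official openai Python SDK (v1.x); the endpoint, key, API version, and deployment name are placeholders you would supply from your own Azure resource:

```python
# Azure OpenAI chat-completion sketch (endpoint/key/deployment are placeholders;
# assumes the openai>=1.x SDK).
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt4o-deployment",   # your Azure deployment name, not the raw model id
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what Azure Cognitive Services offers."},
    ],
)
print(response.choices[0].message.content)
```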
Posted 1 week ago
6.0 - 11.0 years
2 - 13 Lacs
kolkata, mumbai, new delhi
Work from Office
We are seeking a highly skilled AWS Data Engineer with strong expertise in designing, building, and optimizing large-scale data pipelines and data lake/warehouse solutions on AWS. The ideal candidate will have extensive experience in data engineering, ETL development, cloud-based data platforms, and modern data architecture practices. Key Responsibilities: Design, build, and maintain scalable data pipelines and ETL workflows using AWS services. Develop, optimize, and maintain data lake and data warehouse solutions (e.g., S3, Redshift, Glue, Athena, EMR, Snowflake on AWS). Work with structured and unstructured data from multiple sources, ensuring data quality, governance, and security. Collaborate with data scientists, analysts, and business stakeholders to enable analytics and AI/ML use cases. Implement best practices for data ingestion, transformation, storage, and performance optimization. Monitor and troubleshoot data pipelines to ensure reliability and scalability. Contribute to data modeling, schema design, partitioning, and indexing strategies. Support real-time and batch data processing using tools like Kinesis, Kafka, or Spark. Ensure compliance with security and regulatory standards (IAM, encryption, GDPR, HIPAA, etc.). Required Skills & Experience: 6+ years of experience in Data Engineering, with at least 3+ years on the AWS cloud ecosystem. Strong programming skills in Python, PySpark, or Scala. Hands-on experience with AWS services: Data Storage: S3, DynamoDB, RDS, Redshift. Data Processing: Glue, EMR, Lambda, Step Functions. Query & Analytics: Athena, Redshift Spectrum, QuickSight. Streaming: Kinesis / MSK (Kafka). Strong experience with SQL (query optimization, stored procedures, performance tuning). Knowledge of ETL/ELT tools (Glue, AWS Data Pipeline, Informatica, Talend, dbt preferred). Experience with data modeling (dimensional, star/snowflake schema). Knowledge of DevOps practices for data (CI/CD, IaC using Terraform/CloudFormation). Familiarity with monitoring & logging tools (CloudWatch, Datadog, ELK, Prometheus). Strong understanding of data governance, lineage, cataloging (Glue Data Catalog, Collibra, Alation). Preferred Skills (Good to Have): Experience with Snowflake, Databricks, or Apache Spark on AWS. Exposure to machine learning pipelines (SageMaker, Feature Store). Knowledge of containerization & orchestration (Docker, Kubernetes, ECS, EKS). Exposure to Agile methodology and DataOps practices. AWS certifications (AWS Certified Data Analytics - Specialty / Solutions Architect / Big Data). Education: Bachelor's/Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
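A small hedged boto3 sketch for the S3/Athena corner of this stack: list objects under a prefix, then start an Athena query. The bucket, database, and output location are placeholders:

```python
# boto3 sketch: list S3 objects and start an Athena query (all names placeholders).
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="my-data-lake", Prefix="raw/orders/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

athena = boto3.client("athena")
run = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM orders",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("query execution id:", run["QueryExecutionId"])
```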
Posted 1 week ago
5.0 - 9.0 years
9 - 13 Lacs
kolkata, mumbai, new delhi
Work from Office
We are seeking a Lead Engineer - Data Engineering to design and drive the next generation of our data platforms. This role requires a strong blend of architecture expertise, hands-on engineering in big data technologies, and leadership in building scalable, reliable, and future-ready data systems. The candidate will play a critical role in architecting data platforms, setting engineering standards, and mentoring the data engineering team while partnering with cross-functional stakeholders to enable high-quality data-driven decision-making. Key Responsibilities: Architecture & Design: Lead the design and architecture of large-scale, distributed data pipelines and platforms with a focus on scalability, performance, and cost-effectiveness. Development: Build, optimize, and maintain data pipelines and data products using Apache Spark, Python, and GCP-native services (BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Storage, etc.). Data Modeling: Design and implement logical and physical data models that support analytical and operational use cases. Leadership: Mentor and coach junior data engineers, drive code quality, enforce best practices, and promote knowledge sharing. Collaboration: Work closely with Data Scientists, Analysts, and Product/Engineering teams to ensure data solutions meet business requirements. Best Practices: Establish data engineering standards around CI/CD pipelines, testing, monitoring, and governance. Innovation: Evaluate emerging tools, technologies, and practices to evolve the data ecosystem. Required Skills & Qualifications: Strong expertise in Data Engineering architecture and building distributed, large-scale systems. Hands-on experience with Apache Spark (batch & streaming). Proficiency in Python for data engineering. In-depth knowledge of Google Cloud Platform (GCP) services for big data and analytics. Strong understanding of data modeling, data warehousing concepts, and schema design patterns. Experience with data pipeline orchestration tools (Airflow, Cloud Composer, etc.). Knowledge of CI/CD for data applications and DevOps practices. Excellent problem-solving skills, communication abilities, and stakeholder management. Preferred Qualifications: Experience leading a data engineering team in a fast-paced, product-driven environment. Exposure to real-time data processing solutions (Kafka, Pub/Sub, Spark Streaming, etc.). Familiarity with modern data stack tools (dbt, Looker, etc.). Bachelor's/Master's degree in Computer Science, Engineering, or a related technical discipline.
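The orchestration requirement (Airflow / Cloud Composer) in one hedged sketch: a two-task DAG on a daily schedule. The dag_id and task callables are invented for illustration:

```python
# Minimal Airflow DAG sketch (invented dag_id and tasks; Airflow >= 2.4 style,
# where the parameter is `schedule` rather than `schedule_interval`).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")

def load():
    print("writing to BigQuery")

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load   # load runs only after extract succeeds
```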
Posted 1 week ago