12.0 - 15.0 years
0 - 20 Lacs
Noida
Work from Office
Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using GCP Dataflow.
- Collaborate with cross-functional teams to gather requirements and design solutions for complex data processing needs.
- Develop automated testing frameworks to ensure high-quality delivery of data products.
- Troubleshoot pipeline failures and errors in a timely manner.

Job Requirements:
- 12-15 years of experience in software development, with expertise in data engineering on Google Cloud Platform (GCP).
- Strong understanding of GCP data and storage services such as BigQuery and Cloud Storage buckets.
- Experience with container orchestration services such as Google Kubernetes Engine (GKE) or Cloud Run.
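As a rough illustration of the Dataflow pipeline work this role describes, the sketch below shows a minimal Apache Beam batch pipeline in Python. The project, bucket, table, and field names are placeholders invented for the example, and the target BigQuery table is assumed to already exist.

```python
# A minimal, hypothetical sketch of a Beam/Dataflow pipeline of the kind described above.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_csv_line(line):
    """Split a raw CSV line into a dict; assumes a simple id,amount layout."""
    record_id, amount = line.split(",")
    return {"id": record_id, "amount": float(amount)}


def run():
    options = PipelineOptions(
        runner="DataflowRunner",          # use "DirectRunner" for local testing
        project="example-project",        # placeholder project id
        region="us-central1",
        temp_location="gs://example-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/input/*.csv")
            | "Parse" >> beam.Map(parse_csv_line)
            | "Filter" >> beam.Filter(lambda row: row["amount"] > 0)
            | "Write" >> beam.io.WriteToBigQuery(
                "example-project:analytics.orders",   # assumed existing table
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```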
Posted 12 hours ago
10.0 - 18.0 years
0 Lacs
Pune, Maharashtra
On-site
We are looking for a seasoned Senior Data Architect with extensive knowledge of Databricks and Microsoft Fabric to join our team. In this role, you will lead the design and implementation of scalable data solutions for BFSI and HLS clients.

As a Senior Data Architect specializing in Databricks and Microsoft Fabric, you will architect and implement secure, high-performance data solutions on the Databricks and Azure Fabric platforms. Your responsibilities will include leading discovery workshops, designing end-to-end data pipelines, optimizing workloads for performance and cost efficiency, and ensuring compliance with data governance, security, and privacy policies. You will collaborate with client stakeholders and internal teams to deliver technical engagements and provide guidance on best practices for Databricks and Microsoft Azure. Additionally, you will stay current with industry developments and recommend new data architectures, technologies, and standards to enhance our solutions.

As a subject matter expert in Databricks and Azure Fabric, you will deliver workshops, webinars, and technical presentations, and develop white papers and reusable artifacts that showcase our company's value proposition. You will also work closely with Databricks partnership teams to contribute to co-marketing and joint go-to-market strategies. In terms of business development support, you will collaborate with sales and pre-sales teams to provide technical guidance during RFP responses and identify upsell and cross-sell opportunities within existing accounts.

To be successful in this role, you should have at least 10 years of experience in data architecture, engineering, or analytics roles, with specific expertise in Databricks and Azure Fabric, along with strong communication and presentation skills and the ability to collaborate effectively with diverse teams. Certifications in cloud platforms such as AWS and Microsoft Azure will be advantageous.

In return, we offer a competitive salary and benefits package, a culture focused on talent development, and opportunities to work with cutting-edge technologies. At Persistent, we are committed to fostering diversity and inclusion in the workplace and invite applications from all qualified individuals. We provide a supportive and inclusive environment where all employees can thrive and unleash their full potential. Join us at Persistent and accelerate your growth professionally and personally while making a positive impact on the world with the latest technologies.
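To make the pipeline-design side of this role concrete, here is an illustrative PySpark/Delta Lake pattern of the kind a Databricks architect might standardize for a client. The mount path, column names, and table name are assumptions, not details from the posting.

```python
# Illustrative bronze-to-silver refinement step on Databricks (Delta Lake).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_example").getOrCreate()

# Read raw ("bronze") data landed in cloud storage (hypothetical mount point).
bronze = spark.read.format("json").load("/mnt/raw/transactions/")

# Apply basic cleansing and conforming rules for the "silver" layer.
silver = (
    bronze
    .dropDuplicates(["transaction_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("ingest_date", F.current_date())
)

# Write as a Delta table, partitioned for downstream query performance.
# Assumes a "finance" schema already exists in the metastore.
(
    silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("ingest_date")
    .saveAsTable("finance.silver_transactions")
)
```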
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
The role of warehousing and logistics systems is becoming increasingly crucial in enhancing the competitiveness of various companies and contributing to the overall efficiency of the global economy. Modern intra-logistics solutions integrate cutting-edge mechatronics, sophisticated software, advanced robotics, computational perception, and AI algorithms to ensure high throughput and streamlined processing for critical commercial logistics functions.

Our Warehouse Execution Software is designed to optimize intralogistics and warehouse automation by utilizing advanced optimization techniques. By synchronizing discrete logistics processes, we have created a real-time decision engine that maximizes labor and equipment efficiency. Our software empowers customers with the operational agility essential for meeting the demands of an omni-channel environment.

We are seeking a dynamic individual who can develop state-of-the-art MLOps and DevOps frameworks for AI model deployment. The ideal candidate should possess expertise in cloud technologies, deployment architectures, and software production standards. Moreover, effective collaboration within interdisciplinary teams is key to successfully guiding products through the development cycle.

**Core Job Responsibilities:**
- Develop comprehensive pipelines covering the ML lifecycle from data ingestion to model evaluation.
- Collaborate with AI scientists to expedite the operationalization of ML algorithms.
- Establish CI/CD/CT pipelines for ML algorithms.
- Implement model deployment both in cloud and on-premises edge environments.
- Lead a team of DevOps/MLOps engineers.
- Stay updated on new tools, technologies, and industry best practices.

**Key Qualifications:**
- Master's degree in Computer Science, Software Engineering, or a related field.
- Proficiency in cloud platforms, particularly GCP, and relevant skills like Docker, Kubernetes, and edge computing.
- Familiarity with task orchestration tools such as MLflow, Kubeflow, Airflow, Vertex AI, and Azure ML.
- Strong programming skills, preferably in Python.
- Robust DevOps expertise including Linux/Unix, testing, automation, Git, and build tools.
- Knowledge of data engineering tools like Beam, Spark, Pandas, SQL, and GCP Dataflow is advantageous.
- Minimum 5 years of experience in relevant fields, including academic exposure.
- At least 3 years of experience in managing a DevOps/MLOps team.
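As a small, hedged example of the ML-lifecycle tooling this posting lists, the sketch below shows basic experiment tracking with MLflow (one of the tools named above). The experiment name, model, and synthetic dataset are placeholders for illustration only.

```python
# A minimal MLflow tracking sketch: log parameters, a metric, and the trained model
# so downstream CI/CD/CT steps can pick the artifact up from the tracking server.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("warehouse-slotting-demo")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 8}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    # Logging the model makes it available for later packaging/serving steps.
    mlflow.sklearn.log_model(model, artifact_path="model")
```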
Posted 2 weeks ago
4.0 - 7.0 years
4 - 7 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilize advanced analytics techniques to help clients optimize their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimize business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, or status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities / Job Accountabilities:
- Hands-on experience with GCP data components (BigQuery, Data Fusion, Cloud SQL, etc.)
- Understanding of data lake and data warehouse concepts
- Manage the DevOps lifecycle of the project (code repository, build, release)

Good to Have:
- End-to-end BI landscape knowledge
- Participation in unit and integration testing
- Interaction with business users for requirement understanding and UAT
- Understanding of data security and data compliance
- Agile understanding
- Project documentation understanding
- Good SQL knowledge
- Certifications
- Domain knowledge of different industry sectors

Mandatory Skill Sets: GCP
Preferred Skill Sets: GCP
Years of Experience Required: 4 to 7 years
Education Qualification: Graduate Engineer or Management Graduate
Degrees/Field of Study Required: Bachelor of Engineering
Degrees/Field of Study Preferred: Not specified
Certifications: Not specified
Required Skills: GCP, Auditing
Optional Skills: Accepting Feedback, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis
Travel Requirements: Not Specified
Available for Work Visa Sponsorship: No
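To illustrate the hands-on BigQuery work the accountabilities above call for, here is a minimal query sketch using the official Python client. The project, dataset, table, and column names are invented for the example.

```python
# A small, hypothetical BigQuery query example (google-cloud-bigquery client).
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project id

sql = """
    SELECT region, SUM(revenue) AS total_revenue
    FROM `example-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY region
    ORDER BY total_revenue DESC
"""

# query() submits the job; result() blocks until rows are available.
for row in client.query(sql).result():
    print(f"{row.region}: {row.total_revenue}")
```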
Posted 1 month ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs for performance through tuning, partitioning, and caching strategies.
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance optimization skills.
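The sketch below illustrates the kind of PySpark transformation, caching, and partitioned output the responsibilities above describe. Bucket paths, column names, and the schema are assumptions made for the example, not part of the posting.

```python
# Illustrative PySpark ETL sketch: cleanse, cache a reused dataset, aggregate,
# and write partitioned Parquet output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_example").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/raw/orders/")  # placeholder path

# Cache the cleansed dataset because it feeds two downstream aggregations.
cleansed = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
).cache()

daily_revenue = cleansed.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
top_customers = (
    cleansed.groupBy("customer_id")
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("spend"))
)

# Partition the daily output by date so downstream readers can prune partitions.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_revenue/"
)
top_customers.write.mode("overwrite").parquet(
    "s3a://example-bucket/curated/top_customers/"
)
```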
Posted 1 month ago
6.0 - 10.0 years
6 - 10 Lacs
Mumbai, Maharashtra, India
On-site
KEY ACCOUNTABILITIES

70% of Time - Excellent Technical Work
- Design, develop, and optimize data pipelines and ETL/ELT workflows using GCP services (BigQuery, Dataflow, Pub/Sub, Cloud Functions, etc.)
- Build and maintain data architecture that supports structured and unstructured data from multiple sources
- Work closely with statisticians and data scientists to provision clean, transformed datasets for advanced modeling and analytics
- Enable self-service BI through efficient data modeling and provisioning in tools like Looker, Power BI, or Tableau
- Implement data quality checks, monitoring, and documentation to ensure high data reliability and accuracy
- Collaborate with DevOps/Cloud teams to ensure data infrastructure is secure, scalable, and cost-effective
- Support and optimize workflows for data exploration, experimentation, and productization of models
- Participate in data governance efforts, including metadata management, data cataloging, and access controls

15% of Time - Client Consultation and Business Partnering
- Work effectively with clients to identify client needs and success criteria, and translate them into clear project objectives, timelines, and plans.
- Be responsive and timely in sharing project updates, responding to client queries, and delivering on project commitments.
- Clearly communicate analysis, insights, and conclusions to clients using written reports and real-time meetings.

10% of Time - Innovation, Continuous Improvement (CI), and Personal Development
- Learn and apply a CI mindset to work, seeking opportunities for improvements in efficiency and client value.
- Identify new resources, develop new methods, and seek external inspiration to drive innovations in our work processes.
- Continually build skills and knowledge in the fields of statistics and the relevant sciences.

5% of Time - Administration
- Participate in all required training (Safety, HR, Finance, CI, other) and actively participate in GKS and ITQ meetings, events, and activities.
- Complete other administrative tasks as required.

MINIMUM QUALIFICATIONS
- Minimum Degree Requirements: Master's degree from an accredited university
- Minimum 6 years of related experience required

Specific Job Experience or Skills Needed:
- 6+ years of experience in data engineering roles, including strong hands-on GCP experience
- Proficiency in GCP services such as BigQuery, Cloud Storage, Cloud Composer (Airflow), Dataflow, and Pub/Sub
- Strong SQL skills and experience working with large-scale data warehouses
- Solid programming skills in Python and/or Java/Scala
- Experience with data modeling, schema design, and performance tuning
- Familiarity with CI/CD, Git, and infrastructure-as-code principles (Terraform preferred)
- Strong communication and collaboration skills across cross-functional teams

For Global Knowledge Services:
- Ability to work effectively with internal/global team members across functions.
- High self-motivation, with the ability to work both independently and in teams.
- Excels at driving projects to completion, with attention to detail.
- Ability to exercise judgment in handling confidential and proprietary information.
- Ability to effectively prioritize, multi-task, and execute tasks according to a plan.
- Able to work on multiple priorities and projects simultaneously.
- Demonstrated creative problem-solving abilities, attention to detail, and ability to think outside the box.

PREFERRED QUALIFICATIONS
- Preferred Major Area of Study: Master's degree in Computer Science, Engineering, Data Science, or a related field
- Preferred Professional Certifications: GCP
- 6 years of related experience preferred
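As a hedged illustration of the Pub/Sub-based ingestion mentioned in the accountabilities above, here is a minimal publishing snippet using the official client library. The project id, topic name, and event payload are placeholders.

```python
# Hypothetical Pub/Sub publisher: the kind of event source that feeds
# Dataflow/BigQuery pipelines like those described above.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "inventory-events")  # placeholders

event = {"sku": "ABC-123", "warehouse": "PUNE-01", "on_hand": 42}

# publish() returns a future; result() blocks until the server assigns a message id.
future = publisher.publish(topic_path, data=json.dumps(event).encode("utf-8"))
print("Published message id:", future.result())
```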
Posted 2 months ago
5.0 - 10.0 years
20 - 35 Lacs
Pune, Gurugram
Work from Office
In one sentence: We are seeking a skilled Database Migration Specialist with deep expertise in mainframe modernization and data migration to cloud platforms such as AWS, Azure, or GCP. The ideal candidate will have hands-on experience migrating legacy systems (COBOL, DB2, IMS, VSAM, etc.) to modern cloud-native databases like PostgreSQL, Oracle, or NoSQL.

What will your job look like?
- Lead and execute end-to-end mainframe-to-cloud database migration projects.
- Analyze legacy systems (z/OS, Unisys) and design modern data architectures.
- Extract, transform, and load (ETL) complex datasets, ensuring data integrity and taxonomy alignment.
- Collaborate with cloud architects and application teams to ensure seamless integration.
- Optimize performance and scalability of migrated databases.
- Document migration processes, tools, and best practices.

Required Skills & Experience:
- 5+ years with mainframe systems (COBOL, CICS, DB2, IMS, JCL, VSAM, Datacom).
- Proven experience in cloud migration (AWS DMS, Azure Data Factory, GCP Dataflow, etc.).
- Strong knowledge of ETL tools, data modeling, and schema conversion.
- Experience with PostgreSQL, Oracle, or other cloud-native databases.
- Familiarity with data governance, security, and compliance in cloud environments.
- Excellent problem-solving and communication skills.
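To make one step of such a migration concrete, here is a simplified sketch that parses a fixed-width mainframe extract (a stand-in for a VSAM/copybook-style layout, assumed already converted from EBCDIC to text) and loads it into PostgreSQL. Field offsets, table name, and credentials are invented for illustration.

```python
# Assumed example: fixed-width legacy extract -> PostgreSQL staging table.
import psycopg2

# Hypothetical copybook-style layout: CUST-ID PIC X(8), NAME PIC X(20), BALANCE PIC 9(7)V99.
FIELDS = [("cust_id", 0, 8), ("name", 8, 28), ("balance", 28, 37)]


def parse_record(line):
    """Slice one fixed-width record into a dict of cleaned fields."""
    row = {name: line[start:end].strip() for name, start, end in FIELDS}
    row["balance"] = int(row["balance"]) / 100  # implied two decimal places
    return row


def load(extract_path):
    with open(extract_path, encoding="utf-8") as f:
        rows = [parse_record(line.rstrip("\n")) for line in f if line.strip()]

    # Placeholder connection details; real projects would use managed secrets.
    conn = psycopg2.connect(host="localhost", dbname="target", user="etl", password="secret")
    with conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO customers (cust_id, name, balance) "
            "VALUES (%(cust_id)s, %(name)s, %(balance)s)",
            rows,
        )
    conn.close()


if __name__ == "__main__":
    load("customer_extract.txt")
```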
Posted 2 months ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Remote
Job Description

Job Title: Offshore Data Engineer
Base Location: Bangalore
Work Mode: Remote
Experience: 5+ Years

We are looking for a skilled Offshore Data Engineer with strong experience in Python, SQL, and Apache Beam. Familiarity with Java is a plus. The ideal candidate should be self-driven, collaborative, and able to work in a fast-paced environment.

Key Responsibilities:
- Design and implement reusable, scalable ETL frameworks using Apache Beam and GCP Dataflow.
- Develop robust data ingestion and transformation pipelines using Python and SQL.
- Integrate Kafka for real-time data streams alongside batch workloads.
- Optimize pipeline performance and manage costs within GCP services.
- Work closely with data analysts, data architects, and product teams to gather and understand data requirements.
- Manage and monitor BigQuery datasets, tables, and partitioning strategies.
- Implement error handling, resiliency, and observability mechanisms across pipeline components.
- Collaborate with DevOps teams to enable automated delivery (CI/CD) for data pipeline components.

Required Skills:
- 5+ years of hands-on experience in Data Engineering or Software Engineering.
- Proficiency in Python and SQL.
- Good understanding of Java (for reading or modifying codebases).
- Experience building ETL pipelines with Apache Beam and Google Cloud Dataflow.
- Hands-on experience with Apache Kafka for stream processing.
- Solid understanding of BigQuery and data modeling on GCP.
- Experience with GCP services (Cloud Storage, Pub/Sub, Cloud Composer, etc.).

Good to Have:
- Experience building reusable ETL libraries or framework components.
- Knowledge of data governance, data quality checks, and pipeline observability.
- Familiarity with Apache Airflow or Cloud Composer for orchestration.
- Exposure to CI/CD practices in a cloud-native environment (Docker, Terraform, etc.).

Tech stack: Python, SQL, Java, GCP (BigQuery, Pub/Sub, Cloud Storage, Cloud Composer, Dataflow), Apache Beam, Apache Kafka, Apache Airflow, CI/CD (Docker, Terraform)
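For the Kafka integration this role mentions, the sketch below shows a minimal consumer using the kafka-python library. The broker address, topic name, consumer group, and message shape are assumptions made for the example.

```python
# Minimal Kafka consumer sketch (kafka-python): micro-batch incoming JSON events.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],      # placeholder broker
    group_id="data-eng-example",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

buffer = []
for message in consumer:
    buffer.append(message.value)
    # In a real pipeline this batch would be written to BigQuery or handed to a
    # Beam/Dataflow streaming job; here we just report a count as a placeholder.
    if len(buffer) >= 100:
        print(f"Flushing {len(buffer)} events (up to offset {message.offset})")
        buffer.clear()
```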
Posted 2 months ago
3.0 - 8.0 years
0 - 0 Lacs
Hyderabad
Work from Office
Hiring for GCP Cloud Engineer / GCP Data Engineer

We are looking for candidates with 3+ years of experience.
Skills: Airflow, GCP, Hadoop, SQL, ETL, Python, BigQuery
We are looking for immediate joiners (15-30 days).
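Since Airflow tops the skills list, here is an illustrative two-task DAG showing the extract-then-load orchestration pattern; DAG id, task names, and schedule are placeholders, not a real pipeline.

```python
# Illustrative Airflow 2.x DAG: a daily extract -> load sequence.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source data, e.g. from GCS or an API")


def load():
    print("load transformed data into BigQuery")


with DAG(
    dag_id="example_gcp_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after a successful extract
```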
Posted Date not available