
6633 Databricks Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

15 - 30 Lacs

Dehradun, Uttarakhand, India

Remote

Experience: 4+ years
Salary: INR 15,00,000-30,00,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance managed by NuStudio.AI)
(Note: This is a requirement for one of Uplers' clients, an AI-first, API-powered data platform.)

Must-have skills: Databricks, dbt, Delta Lake, Spark, Unity Catalog, AI, Airflow, GCP (BigQuery, Pub/Sub, Dataflow, Cloud Storage, Cloud Functions), PySpark, Databricks Workflows, ETL/ELT, AWS, Hadoop

What the client is looking for: We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, data products, and analytical pipelines in the cloud to power real-time AI systems.

As a Data Engineer, you'll:
- Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Cloud Functions)
- Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
- Work across batch and real-time architectures that feed LLMs and AI/ML systems
- Own feature engineering pipelines that power production models and intelligent agents
- Collaborate with platform and ML teams to design observable, lineage-aware, and cost-efficient solutions
- Bonus: experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them

Why us?
- Production-grade data and AI solutions: your pipelines directly impact mission-critical and client-facing interactions
- Lean team, no red tape: build, own, ship
- Remote-first, with an async culture that respects your time
- Competitive compensation and benefits

Our stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply:
1. Click Apply and register or log in on the Uplers portal.
2. Complete the screening form and upload an updated resume.
3. Increase your chances of being shortlisted and meet the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant opportunities and progress in their careers, and we will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, apply today. We are waiting for you!
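For illustration, a minimal sketch of the kind of batch pipeline this role describes, written in PySpark for GCP. The bucket names, field names (event_id, event_ts), and the assumption that a GCS connector is available (as on Dataproc or Databricks) are all hypothetical, not from the posting:

```python
# Minimal sketch: batch ETL with PySpark on GCP. Paths and schema are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: raw JSON events landed in Cloud Storage by an upstream Pub/Sub sink.
raw = spark.read.json("gs://example-raw-bucket/events/2025/07/31/")

# Transform: basic cleansing and a derived partition column.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_id").isNotNull())
)

# Load: partitioned Parquet in the curated zone, ready for BigQuery external tables.
(clean.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("gs://example-curated-bucket/events/"))
```

In practice a job like this would be scheduled from Airflow or Databricks Workflows, as the responsibilities above mention.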

Posted 1 day ago

Apply

4.0 years

15 - 30 Lacs

Mysore, Karnataka, India

Remote

Same role and description as the Uplers/NuStudio.AI remote Data Engineer listing above; only the location differs.

Posted 1 day ago

Apply

4.0 years

15 - 30 Lacs

Thiruvananthapuram, Kerala, India

Remote

Same role and description as the Uplers/NuStudio.AI remote Data Engineer listing above; only the location differs.

Posted 1 day ago

Apply

4.0 years

15 - 30 Lacs

Vijayawada, Andhra Pradesh, India

Remote

Same role and description as the Uplers/NuStudio.AI remote Data Engineer listing above; only the location differs.

Posted 1 day ago

Apply

4.0 years

15 - 30 Lacs

Patna, Bihar, India

Remote

Same role and description as the Uplers/NuStudio.AI remote Data Engineer listing above; only the location differs.

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position: DevOps Engineer
Location: Pune
Duration: Contract to hire

Job Description:
• Design, implement, and manage cloud infrastructure using Terraform on Azure (and AWS where required).
• Automate and orchestrate Azure infrastructure components with a focus on scalability, security, and cost optimization.
• Leverage Azure data services such as Data Factory, Synapse, and Databricks for cloud data platform tasks.
• Optimize and manage database workloads with SQL/PLSQL and query-optimization techniques.
• Implement and maintain CI/CD pipelines using tools such as Azure DevOps and GitHub Actions.
• Manage and support multi-cloud environments, ensuring seamless operations and integration.
• Troubleshoot infrastructure and application issues across cloud platforms with effective scripting and automation.
• Drive adoption of IaC practices and contribute to continuous improvement of DevOps workflows.

Posted 1 day ago

Apply

2.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Title: Azure Data Engineer
Experience: 2-5 years

About the Company: EY is a leading global professional services firm offering a broad range of services in assurance, tax, transaction, and advisory services. We're looking for candidates with strong technology and data understanding in the big data engineering space and proven delivery capability.

Your Key Responsibilities:
- Develop and deploy Azure Databricks solutions in a cloud environment using Azure Cloud services
- Design, develop, and deploy ETL to cloud services
- Interact with onshore teams, understand their business goals, and contribute to the delivery of the workstreams
- Design and optimize model code for faster execution

Skills and Attributes for Success:
- 3 to 5 years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions
- Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Data Catalog, Cosmos DB, etc.
- Hands-on experience with Spark
- Hands-on programming experience in Python/Scala
- Well versed in DevOps and CI/CD deployments
- Hands-on experience in SQL and procedural SQL languages
- Strong analytical skills and enjoyment of solving complex technical problems

To qualify for the role, you must have:
- Working experience in an Agile-based delivery methodology (preferable)
- A flexible, proactive, self-motivated working style with strong personal ownership of problem resolution
- Excellent debugging and optimization skills
- Experience in enterprise-grade solution implementations and in converting business problems/challenges into technical solutions considering security, performance, scalability, etc.
- Excellent communication skills (written and verbal, formal and informal)
- Participation in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
- Client management skills

Education: BS/MS degree in Computer Science, Engineering, or a related subject is required.

EY is committed to providing equal opportunities to all candidates. We welcome and encourage applications from candidates with diverse experiences and backgrounds.

EY | Building a better working world: EY is building a better working world by creating new value for clients, people, society, and the planet, while building trust in capital markets. Enabled by data, AI, and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy, and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network, and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
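For illustration, a minimal sketch of an Azure Databricks ingestion step of the kind this role covers: CSV from ADLS Gen2 into a Delta table. The storage account, container, table, and column names are hypothetical, and the `spark` session is assumed to be the one provided by the Databricks runtime:

```python
# Minimal sketch: ADLS Gen2 CSV -> Delta table on Azure Databricks. Names assumed.
from pyspark.sql import functions as F

source_path = "abfss://raw@examplestorageacct.dfs.core.windows.net/sales/2025/"

df = (spark.read                     # `spark` is provided by the Databricks runtime
        .option("header", "true")
        .option("inferSchema", "true")
        .csv(source_path))

curated = (df.withColumn("load_date", F.current_date())
             .dropDuplicates(["order_id"]))

# Write as a managed Delta table; downstream Synapse/Power BI would read from here.
curated.write.format("delta").mode("append").saveAsTable("curated.sales_orders")
```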

Posted 1 day ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About the Position: We are conducting an in-person hiring drive for the position of Data Engineer (Azure Databricks) in Pune and Bengaluru on 2nd August 2025.

Interview locations:
- Pune: Persistent Systems, 9A Aryabhata-Pingala, 12 Kashibai Khilare Path, Erandwane, Pune, Maharashtra 411004
- Bangalore: Persistent Systems, The Cube at Karle Town Center Rd, Dada Mastan Layout, Manayata Tech Park, Nagavara, Bengaluru, Karnataka 560024

We are looking for an experienced Azure Data Engineer to join our growing team. The ideal candidate will have a strong background in Azure Databricks, DBT, Python/PySpark, and SQL. You will work closely with our engineers and business teams to ensure optimal performance, scalability, and availability of our data pipelines.

Role: Data Engineer (Azure Databricks)
Job Location: Pune & Bengaluru
Experience: 4+ years
Job Type: Full-time employment

What You'll Do:
- Design and implement complex, scalable data pipelines for ingestion, processing, and transformation using Azure technologies.
- Collaborate with architects, data analysts, and business analysts to understand data requirements and develop efficient workflows.
- Develop and manage data storage solutions, including Azure SQL Database, Data Lake, and Blob Storage.
- Leverage Azure Data Factory and other cloud-native tools to build and maintain ETL processes.
- Conduct unit testing and ensure the quality of data pipelines; mentor junior engineers and review their deliverables.
- Monitor pipeline performance, troubleshoot issues, and provide regular status updates.
- Optimize data workflows for performance and cost-efficiency; implement automation to reduce manual effort.

Expertise You'll Bring:
- Strong experience with Azure and Databricks
- Experience with Python/PySpark
- Experience with SQL databases
- Experience with DBT and Dremio is a plus

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."

Posted 1 day ago

Apply

2.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Same role and description as the EY Azure Data Engineer listing above; only the location differs.

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Greetings! One of our clients, a top MNC, is looking for a Data Scientist.

Important notes:
- Only candidates who can join immediately or within 7 days should apply.
- Base locations: Gurgaon and Bengaluru (hybrid setup, 3 days work from office).

Role: Data Scientist
Experience: 4 to 8 years (immediate joiners only)

Must-have skills:
- Bachelor's or master's degree in Computer Science, Data Science, Engineering, or a related field
- Strong programming skills in languages such as Python and SQL
- Experience developing and deploying AI/ML and deep learning solutions with libraries and frameworks such as scikit-learn, TensorFlow, and PyTorch
- Experience with ETL and data warehouse tools such as Azure Data Factory, Azure Data Lake, or Databricks
- Knowledge of math, probability, and statistics
- Familiarity with a variety of ML algorithms
- Good experience with cloud infrastructure such as Azure (preferred), AWS, or GCP
- Exposure to Gen AI, vector databases, and LLMs (large language models)

Good-to-have skills:
- Experience with Flask/Django or Streamlit
- Experience with MLOps: MLflow, Kubeflow, CI/CD pipelines, etc.
- Experience with Docker, Kubernetes, etc.

Responsibilities:
- Collaborate with software engineers, business stakeholders, and/or domain experts to translate business requirements into product features, tools, projects, and AI/ML, NLP/NLU, and deep learning solutions.
- Develop, implement, and deploy AI/ML solutions.
- Preprocess and analyze large datasets to identify patterns, trends, and insights.
- Evaluate, validate, and optimize AI/ML models to ensure their accuracy, efficiency, and generalizability.
- Deploy applications and AI/ML models into cloud environments such as AWS, Azure, or GCP.
- Monitor and maintain the performance of AI/ML models in production environments, identifying opportunities for improvement and updating models as needed.
- Document AI/ML model development processes, results, and lessons learned to facilitate knowledge sharing and continuous improvement.

Interested candidates who match the JD and can join ASAP should apply along with the following details:
- Total experience
- Relevant experience as a Data Scientist
- Applying for Gurgaon or Bengaluru
- Open to hybrid
- Current CTC
- Expected CTC
- Can join ASAP

We will call you once we receive your updated profile along with the above details.

Thanks, Venkat Solti, solti.v@anlage.co.in
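For illustration, a minimal sketch of the model-building loop this role describes, using scikit-learn. The dataset file, feature set, and target column ("churned") are hypothetical, and numeric features are assumed:

```python
# Minimal sketch: train and evaluate a baseline classifier with scikit-learn.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

df = pd.read_csv("customer_churn.csv")          # assumed input extract
X, y = df.drop(columns=["churned"]), df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = Pipeline([
    ("scale", StandardScaler()),                # scale numeric features
    ("clf", LogisticRegression(max_iter=1000)), # simple, interpretable baseline
])
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```

A production version would typically add experiment tracking (e.g., MLflow) and a deployment step, as the good-to-have skills suggest.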

Posted 1 day ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: AWS Data Engineer (Databricks Focus)

Position Overview: We are seeking a dynamic and innovative AWS Data Engineer with strong Databricks expertise to join our growing team. The ideal candidate will have a balanced mix of data engineering proficiency and application development skills, ready to own tasks, innovate solutions, and proactively drive technical advancements.

Key Responsibilities:
- Design, develop, and optimize data pipelines using Databricks (PySpark) to manage and transform data efficiently.
- Implement APIs leveraging AWS AppSync and Lambda functions to interact effectively with AWS Neptune and AWS OpenSearch.
- Collaborate closely with front-end developers (React on AWS CloudFront) and cross-functional teams to enhance the overall application architecture.
- Ensure adherence to best practices around AWS security services, particularly IAM, ensuring secure and efficient application development.
- Proactively research, recommend, and implement innovative solutions to enhance the performance, scalability, and reliability of data solutions.
- Own assigned tasks and projects end-to-end, demonstrating autonomy and accountability.

Qualifications:
- 10+ years of proven experience with Databricks and PySpark for building scalable data pipelines.
- Experience with AWS Neptune and AWS OpenSearch.
- Solution-level understanding of AWS AppSync.
- Solid understanding of AWS ecosystem services, including Lambda, IAM, and CloudFront.
- Strong capabilities in API development and integration within cloud architectures.
- Experience or familiarity with React.js-based front-end applications is beneficial.
- Excellent analytical, problem-solving, and debugging skills.
- Ability to independently research and develop innovative solutions.
- Strong communication skills with a proactive mindset and the ability to collaborate effectively across teams.

Preferred Qualifications:
- AWS certification (Solutions Architect Associate, Developer Associate, or Data Analytics Specialty) preferred.
- Experience with graph databases and search platforms in a production environment.
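For illustration, a minimal sketch of a Lambda resolver behind AWS AppSync, the integration pattern named above. The event shape follows AppSync's direct Lambda resolver convention; the field name "getAsset" and the returned record are hypothetical, and the Neptune/OpenSearch lookup is deliberately stubbed out:

```python
# Minimal sketch: AppSync direct Lambda resolver. Field names and data are assumed.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    field = event.get("info", {}).get("fieldName")
    args = event.get("arguments", {})
    logger.info("Resolving %s with args %s", field, json.dumps(args))

    if field == "getAsset":
        # Placeholder: in the real service this would query Neptune or OpenSearch.
        return {"id": args.get("id"), "name": "example-asset", "status": "ACTIVE"}

    raise ValueError(f"Unhandled field: {field}")
```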

Posted 1 day ago

Apply

8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Role: Data Engineer
Location: Coimbatore | Hyderabad | Remote
Environment: Databricks with PySpark

Requirements:
1. 5-8 years as a hands-on Data Engineer.
2. Strong data analysis skills are a must.
3. Hands-on experience designing, developing, and maintaining scalable data pipelines.
4. Experience implementing ETL processes and ensuring data quality and performance within the Databricks environment using PySpark.
5. Experience with data warehousing concepts, data modelling, and metadata management is a plus.
6. Good communication skills, especially customer-facing skills.
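For illustration, a minimal sketch of the kind of data-quality gate this role mentions, run inside a Databricks/PySpark pipeline. The table and column names are hypothetical, and `spark` is assumed to be the session provided by the Databricks runtime:

```python
# Minimal sketch: basic data-quality checks in PySpark. Names and thresholds assumed.
from pyspark.sql import functions as F

df = spark.table("bronze.customer_orders")      # `spark` provided by Databricks

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

print(f"rows={total}, null order_id={null_ids}, duplicate order_id={dupes}")

# Fail the pipeline early if quality thresholds are breached.
if null_ids > 0 or dupes > 0:
    raise ValueError("Data quality check failed for bronze.customer_orders")
```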

Posted 1 day ago

Apply

4.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

JOB RESPONSIBILITIES:
The job entails working with our clients and partners to design, define, implement, roll out, and improve Data Quality solutions that leverage tools available in the market, for example Informatica IDQ, SAP DQ, SAP MDG, Collibra DQ, Talend DQ, a custom DQ solution, and/or other leading platforms, for the client's business benefit. The ideal candidate will be responsible for ensuring the accuracy, completeness, consistency, and reliability of data across systems. You will work closely with data engineers, analysts, and business stakeholders to define and implement data quality frameworks and tools.

As part of your role and responsibilities, you will be involved in the entire business development life cycle:
- Meet with business stakeholders to gather information and analyze existing business processes, determine and document gaps and areas for improvement, and prepare requirements documents, functional design documents, etc. In summary, work with project stakeholders to identify business needs and gather requirements for Data Quality and/or Data Governance or Master Data.
- Follow up on the implementation by conducting training sessions and by planning and executing the technical and functional transition to the support team.
- Grasp business and technical concepts and transform them into creative, lean, and smart data management solutions.
- Develop and implement Data Quality solutions on any of the above leading enterprise data management platforms:
  - Assess and improve data quality across multiple systems and domains.
  - Define and implement data quality rules, metrics, and dashboards.
  - Perform data profiling, cleansing, and validation using industry-standard tools.
  - Collaborate with data stewards and business units to resolve data issues.
  - Develop and maintain data quality documentation and standards.
  - Support data governance initiatives and master data management (MDM).
  - Recommend and implement data quality tools and automation strategies.
  - Conduct root cause analysis of data quality issues and propose remediation plans.
  - Implement and take advantage of AI to improve and automate the Data Quality solution.
  - Leverage SAP MDG/ECC experience to deep dive into root cause analysis for assigned use cases; also work with Azure Data Lake (via Databricks) using SQL/Python.
  - Identify and build a data model (conceptual and physical) that provides an automated mechanism to monitor ongoing DQ issues. Multiple workshops may be needed to work through the options and identify the most efficient and effective one.
  - Work with the business (data owners/data stewards) to profile data and expose patterns indicating data quality issues, and identify the impact on specific CDEs deemed important for each business.
  - Identify the financial impact of data quality issues, as well as the business benefit (quantitative/qualitative) of remediation, while managing implementation timelines.
  - Schedule regular working groups with businesses that have identified DQ issues and ensure progress on RCA/remediation or on presenting in DGFs.
  - Identify business DQ rules from which KPIs/measures are stood up that feed into dashboarding/workflows for BAU monitoring; red flags are raised and investigated.
  - Understand the Data Quality value chain, starting with Critical Data Element concepts, Data Quality issues, and Data Quality KPIs/measures; own and execute Data Quality issue assessments to aid improvements to operational processes and BAU initiatives.
  - Highlight risks and hidden DQ issues to the Lead/Manager for further guidance or escalation.
  - Communicate clearly; this is an outward-facing role, and the focus has to be on clearly articulating messages.
  - Support the design, build, and deployment of data quality dashboards via Power BI.
  - Determine escalation paths and construct workflows and alerts that notify process and data owners of unresolved data quality issues.
  - Collaborate with IT and analytics teams to drive innovation (AI, ML, cognitive science, etc.).
  - Work with business functions and projects to create data quality improvement plans.
  - Set targets for data improvements/maturity; monitor and intervene when sufficient progress is not being made.
  - Support initiatives that drive data clean-up of the existing data landscape.

JOB REQUIREMENTS:
i. Education or Certifications:
- Bachelor's/Master's degree in engineering, technology, or other related degrees.
- Relevant professional-level certifications from Informatica, SAP, Collibra, Talend, or any other leading platform/tools.
- Relevant certifications from DAMA, EDM Council, and CMMI-DMM are a bonus.

ii. Work Experience:
- 4-10 years of relevant experience within the Data & Analytics area, with major experience around data management, ideally in Data Quality (DQ) and/or Data Governance or Master Data using relevant tools.
- In-depth knowledge of Data Quality and Data Governance concepts, approaches, methodologies, and tools.
- Client-facing consulting experience is a plus.

iii. Technical and Functional Skills:
- Hands-on experience with any of the above DQ tools in the area of enterprise data management, preferably in complex and diverse systems environments.
- Exposure to data quality concepts: data lifecycle, data profiling, data quality remediation (cleansing, parsing, standardization, enrichment using 3rd-party plugins, etc.).
- Strong understanding of data quality best practices, concepts, data quality management frameworks, and data quality dimensions/KPIs.
- Deep knowledge of SQL and stored procedures.
- Strong knowledge of Master Data, Data Governance, and Data Security.
- Domain knowledge of SAP Finance modules is preferred.
- Hands-on experience with AI use cases in Data Quality or Data Management areas is good to have.
- Concepts and hands-on experience of master data management (matching, merging, creation of golden records for master data entities) are preferred.
- Strong soft skills: interpersonal, team, and communication skills (both verbal and written).
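For illustration, a minimal sketch of rule-driven data-quality measurement as described above: business DQ rules expressed as filter conditions and evaluated into pass-rate KPIs. The table, rule names, and conditions are hypothetical, and `spark` is assumed to come from a Databricks session:

```python
# Minimal sketch: evaluate DQ rules as SQL conditions and report pass-rate KPIs.
from pyspark.sql import functions as F

df = spark.table("finance.vendor_master")       # `spark` provided by Databricks

dq_rules = {
    "vendor_id_not_null": "vendor_id IS NOT NULL",
    "country_code_valid": "length(country_code) = 2",
    "iban_present_for_eu": "region != 'EU' OR iban IS NOT NULL",
}

total = df.count()
for rule_name, condition in dq_rules.items():
    passed = df.filter(F.expr(condition)).count()
    pct = 100.0 * passed / total if total else 0.0
    print(f"{rule_name}: {pct:.1f}% pass ({passed}/{total})")
```

Results like these would typically feed the Power BI dashboards and BAU monitoring workflows mentioned above.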

Posted 1 day ago

Apply

6.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site

Role: Senior Data Engineer
Experience: 4-6 years
Location: Udaipur, Jaipur

Job Description: We are looking for a highly skilled and experienced Data Engineer with 4-6 years of hands-on experience in designing and implementing robust, scalable data pipelines and infrastructure. The ideal candidate will be proficient in SQL and Python and have a strong understanding of modern data engineering practices. You will play a key role in building and optimizing data systems, enabling data accessibility and analytics across the organization, and collaborating closely with cross-functional teams including Data Science, Product, and Engineering.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT data pipelines using SQL and Python
- Collaborate with data analysts, data scientists, and product teams to understand data needs
- Optimize queries and data models for performance and reliability
- Integrate data from various sources, including APIs, internal databases, and third-party systems
- Monitor and troubleshoot data pipelines to ensure data quality and integrity
- Document processes, data flows, and system architecture
- Participate in code reviews and contribute to a culture of continuous improvement

Required Skills:
- 4-6 years of experience in data engineering, data architecture, or backend development with a focus on data
- Strong command of SQL for data transformation and performance tuning
- Experience with Python (e.g., pandas, Spark, ADF)
- Solid understanding of ETL/ELT processes and data pipeline orchestration
- Proficiency with RDBMS (e.g., PostgreSQL, MySQL, SQL Server)
- Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
- Familiarity with version control (Git), CI/CD workflows, and containerized environments (Docker, Kubernetes)
- Basic programming skills
- Excellent problem-solving skills and a passion for clean, efficient data systems

Preferred Skills:
- Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Glue, Dataflow, etc.
- Exposure to enterprise solutions (e.g., Databricks, Synapse)
- Knowledge of big data technologies (e.g., Spark, Kafka, Hadoop)
- Background in real-time data streaming and event-driven architectures
- Understanding of data governance, security, and compliance best practices
- Prior experience working in an agile development environment

Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field.

Visit us: https://kadellabs.com/ | https://in.linkedin.com/company/kadel-labs | https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm
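For illustration, a minimal sketch of a small Python/SQL ETL step of the kind this role describes, with SQLite used as a self-contained stand-in for the target warehouse. The file name, columns, and table name are hypothetical:

```python
# Minimal sketch: pandas ETL into a SQL target (SQLite as a stand-in). Names assumed.
import sqlite3
import pandas as pd

# Extract
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Transform: basic cleansing and a derived metric.
orders = orders.dropna(subset=["order_id"]).drop_duplicates("order_id")
orders["net_amount"] = orders["gross_amount"] - orders["discount"]

# Load
with sqlite3.connect("warehouse.db") as conn:
    orders.to_sql("fact_orders", conn, if_exists="replace", index=False)
    row_count = conn.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0]
    print(f"Loaded {row_count} rows into fact_orders")
```

The same shape carries over to Snowflake, Redshift, or BigQuery targets with the appropriate connector in place of SQLite.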

Posted 1 day ago

Apply

6.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

JOB DESCRIPTION: DATA ENGINEER (Databricks & AWS)

Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.

Locations: Jaipur, Pune, Hyderabad, Bangalore, Noida.

Responsibilities:
• Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, Secrets Manager.
• Build and maintain ETL/ELT pipelines for both batch and streaming data.
• Work with structured and unstructured datasets at scale.
• Apply data modeling principles and advanced SQL techniques.
• Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats.
• Collaborate with product teams to understand requirements and deliver optimized data solutions.
• Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code.
• Work independently with minimal supervision and strong ownership of deliverables.

Must Have:
• 6+ years of experience in data engineering on AWS Cloud.
• Hands-on expertise in Apache Spark (PySpark, SparkSQL), Delta Lake/Iceberg formats, Databricks on AWS, AWS Glue, Amazon Athena, and Amazon Redshift.
• Strong SQL skills and performance-tuning experience on large datasets.
• Good understanding of CI/CD pipelines, especially using DBX and AWS tools.
• Experience with environment setup, cluster management, user roles, and authentication in Databricks.
• Databricks Certified Data Engineer Professional certification (mandatory).

Good To Have:
• Experience migrating ETL pipelines from on-premise or other clouds to AWS Databricks.
• Experience with Databricks ML or Spark 3.x upgrades.
• Familiarity with Airflow, Step Functions, or other orchestration tools.
• Experience integrating Databricks with AWS services in a secured, production-ready environment.
• Experience with monitoring and cost optimization in AWS.

Key Skills:
• Languages: Python, SQL, PySpark
• Big data tools: Apache Spark, Delta Lake, Iceberg
• Databricks on AWS
• AWS services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
• Version control & CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
• Other: data modeling, ETL methodology, performance optimization
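For illustration, a minimal sketch of an incremental upsert into a Delta Lake table on S3, the kind of pipeline this role describes. The S3 paths, schema, and join key are hypothetical, and the `delta` package plus `spark` session are assumed to come from a Databricks cluster:

```python
# Minimal sketch: Delta Lake MERGE (upsert) on Databricks/AWS. Paths and keys assumed.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

updates = (spark.read.parquet("s3://example-landing/customers/2025-07-31/")
                .withColumn("updated_at", F.current_timestamp()))

target = DeltaTable.forPath(spark, "s3://example-lake/silver/customers/")

(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```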

Posted 1 day ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About the Company: Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.

About the Role: We are seeking an experienced Senior Data QA Analyst to support data integration, transformation, and reporting validation for enterprise-scale systems. This role involves close collaboration with data engineers, business analysts, and stakeholders to ensure the quality, accuracy, and reliability of data workflows, especially in Azure Databricks and ETL pipelines.

Key Responsibilities:
- Test planning and execution: collaborate with business analysts and data engineers to understand requirements and translate them into test scenarios and test cases; develop and execute comprehensive test plans and test scripts for data validation; log and manage defects using tools like Azure DevOps; support UAT and post-go-live smoke testing.
- Data integration validation: understand data architecture and workflows, including ETL processes and data movement; write and execute complex SQL queries to validate data accuracy, completeness, and consistency; ensure the correctness of data transformations and mappings based on business logic.
- Report testing: validate the structure, metrics, and content of BI reports; perform cross-checks of report outputs against source systems; ensure reports reflect accurate calculations and align with business requirements.

Required Skills & Experience:
- Bachelor's degree in IT, Computer Science, MIS, or a related field
- 8+ years of experience in QA, especially in data validation or data warehouse testing
- Strong hands-on experience with SQL and data analysis
- Proven experience working with Azure Databricks, Python, and PySpark (preferred)
- Familiarity with data models like data marts, EDW, and operational data stores
- Excellent understanding of data transformation, mapping logic, and BI validation
- Experience with test case documentation, defect tracking, and Agile methodologies
- Strong verbal and written communication skills, with the ability to work in a cross-functional environment

Benefits and Perks:
- Opportunity to work with leading global clients
- Exposure to modern technology stacks and tools
- Supportive and collaborative team environment
- Continuous learning and career development opportunities

Skills: ETL, Agile methodologies, test case design, Databricks, data integration, operational data stores, Azure Databricks, test planning, SQL, testing, EDW, defect tracking, data validation, Python, ETL testing, PySpark, data analysis, data marts, test case documentation, data warehousing
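For illustration, a minimal sketch of automated data validation with pytest, of the kind this role describes. SQLite is used as a self-contained stand-in for the real source and target databases, and the table names and reconciliation rules are hypothetical:

```python
# Minimal sketch: pytest-based source-vs-target validation. Schema and data assumed.
import sqlite3
import pytest

@pytest.fixture
def conn():
    c = sqlite3.connect(":memory:")
    c.executescript("""
        CREATE TABLE src_orders (order_id INTEGER, amount REAL);
        CREATE TABLE tgt_orders (order_id INTEGER, amount REAL);
        INSERT INTO src_orders VALUES (1, 10.0), (2, 20.0);
        INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.0);
    """)
    yield c
    c.close()

def test_row_counts_match(conn):
    src = conn.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]
    assert src == tgt

def test_amount_totals_match(conn):
    src = conn.execute("SELECT SUM(amount) FROM src_orders").fetchone()[0]
    tgt = conn.execute("SELECT SUM(amount) FROM tgt_orders").fetchone()[0]
    assert src == pytest.approx(tgt)
```

Against a real Databricks environment, the fixture would connect to the source and target tables instead of creating in-memory test data.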

Posted 1 day ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Data Engineer
Location: Noida
Experience: 3+ years

Job Description: We are seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with a focus on PySpark, Python, and SQL. Experience with Azure Databricks is a plus.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and systems.
- Work closely with data scientists and analysts to ensure data quality and availability.
- Implement data integration and transformation processes using PySpark and Python.
- Optimize and maintain SQL databases and queries.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Monitor and troubleshoot data pipeline issues to ensure data integrity and performance.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in data engineering.
- Proficiency in PySpark, Python, and SQL.
- Experience with Azure Databricks is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.

Preferred Qualifications:
- Experience with cloud platforms such as Azure, AWS, or Google Cloud.
- Knowledge of data warehousing concepts and technologies.
- Familiarity with ETL tools and processes.

How to Apply: Apart from Easy Apply on LinkedIn, also complete the form at https://forms.office.com/r/N0nYycJ36P

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

Cowbell is signaling a new era in cyber insurance by harnessing technology and data to provide small and medium-sized enterprises (SMEs) with advanced warning of cyber risk exposures, bundled with cyber insurance coverage adaptable to the threats of today and tomorrow. Championing adaptive insurance, Cowbell follows policyholders' cyber risk exposures as they evolve through continuous risk assessment and continuous underwriting. In its unique AI-based approach to risk selection and pricing, Cowbell's underwriting platform, powered by Cowbell Factors, compresses the insurance process from submission to issue to less than 5 minutes.

Founded in 2019 and based in the San Francisco Bay Area, Cowbell has rapidly grown and now operates across the U.S., Canada, U.K., and India. This growth was recently bolstered by a successful Series C fundraising round of $60 million from Zurich Insurance. This investment not only underscores confidence in Cowbell's mission but also accelerates our capacity to revolutionize cyber insurance on a global scale. With the backing of over 25 prominent reinsurance partners, Cowbell is poised to redefine how SMEs navigate the evolving landscape of cyber threats.

The Role: In support of business objectives, we are actively looking for an ambitious person who is not afraid of hard work and embraces ambiguity to join our Information Security team as a Sr. Developer, Application Security. The InfoSec team drives security, privacy, and compliance improvements to reduce risk by building out key security programs. We enable our colleagues to keep the company secure and support our customers' security journey with tried-and-true best practices. We are a Java, Python, and React shop combined with world-class cloud infrastructure such as AWS and Snowflake. Balancing proper security while enabling execution speed for our colleagues is our ultimate goal. It's challenging and rewarding! If you are up for the challenge, come join us.

Responsibilities:
- Cure security defects in code, burning down new and existing vulnerabilities. You can fix the code yourself, and continuous patching is your north star.
- Champion safeguards and standards that keep our code secure and reduce the introduction of new vulnerabilities.
- Partner and collaborate with internal stakeholders to assist with the overall security posture, with an emphasis on the Engineering and Operations/IT areas.
- Work across engineering, product, and business systems teams to enhance and evangelize security in applications (and infrastructure).
- Research emerging technologies and maintain awareness of current security risks in support of security enhancement and development efforts.
- Develop and maintain application scanning solutions to inform stakeholders of security weaknesses and vulnerabilities.
- Review outstanding vulnerabilities with product teams and assist in remediation efforts to reduce risk.

Qualifications:
- Bachelor's degree in computer science or another STEM discipline and 8 to 10+ years of professional experience in security software development.
- Majority of prior experience as a Security Engineer focused on remediation of security vulnerabilities and defects in Java and Python.
- In-depth, demonstrable experience developing in Java and Python; basically, you are a developer first and a security engineer second. Applicants without this experience will not be considered.
- Experience developing in, and securing, JavaScript and React is a plus.
- Experience securing integrations and code that utilizes Elasticsearch, Snowflake, Databricks, or RDS is a big plus.
- Detail-oriented with problem-solving, communication, and analytical skills.
- Expert understanding of CVE and CVSS scoring and how to utilize this data for validation, prioritization, and remediation.
- Excellent understanding and utilization of OWASP.
- Demonstrated ability to secure APIs; techniques and patterns will be assessed.
- Experience designing and implementing application security solutions for web and/or mobile applications.
- Experience developing and reporting vulnerability metrics, as well as articulating how to reproduce and resolve security defects.
- Experience in application penetration testing and an understanding of remediation techniques for common misconfigurations and vulnerabilities.
- Demonstrable experience understanding patching and library upgrade paths, including interdependencies.
- Familiarity with CI/CD tools; previous admin experience in CI/CD is not required but a big plus.
- Capability to deploy, maintain, and operationalize scanning solutions, with hands-on ability to conduct scans across application repositories and infrastructure.
- Willingness to work extended hours and weekends as needed.
- Great at, and enjoys, documenting solutions: creating repeatable instructions for others, operational documentation, technical diagrams, and similar artifacts.

Preferred Qualifications:
- Ability to demonstrate and document threat modeling scenarios using well-known frameworks such as STRIDE.
- Proficiency with penetration testing tools such as Burp Suite, Metasploit, or ZAP.
- Proficiency with SAST and SCA tools; proficiency with DAST and/or OAST tool usage and techniques is even better.
- As a mentor, the experience and desire to provide fellow engineering teams with technical guidance on the impact and priority of security issues and to drive remediation.
- Capability to develop operational processes from scratch, or improve current processes and procedures, through well-thought-out hand-offs, integrations, and automation.
- Familiarity with multiple security domains such as application security, infrastructure security, network security, incident response, and regulatory compliance and certifications.
- Understanding of modern endpoint security technologies and concepts.
- Adept at working with distributed team members.

What Cowbell brings to the table:
- Employee equity plan for all and a wealth enablement plan for select customer-facing roles.
- Comprehensive wellness program, meditation app subscriptions, lunch and learns, book club, happy hours, and much more.
- Professional development and the opportunity to learn the ins and outs of cyber insurance and cybersecurity, as well as continuing to build your professional skills in a team environment.

Equal Employment Opportunity: Cowbell is a leading innovator in cyber insurance, dedicated to empowering businesses to always deliver their intended outcomes as the cyber threat landscape evolves. Guided by our core values of TRUE (Transparency, Resiliency, Urgency, and Empowerment), we are on a mission to be the gold standard for businesses to understand, manage, and transfer cyber risk. At Cowbell, we foster a collaborative and dynamic work environment where every employee is empowered to contribute and grow. We pride ourselves on our commitment to transparency and resilience, ensuring that we not only meet but exceed industry standards. We are proud to be an equal opportunity employer, promoting a diverse and inclusive workplace where all voices are heard and valued. Our employees enjoy competitive compensation, comprehensive benefits, and continuous opportunities for professional development.
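For illustration, a minimal sketch of the kind of code-level remediation this role covers: replacing string-built SQL (OWASP injection) with a parameterized query in Python. SQLite stands in for the application's real database layer, and the table and function names are hypothetical:

```python
# Minimal sketch: SQL injection remediation via parameterized queries. Names assumed.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text.
    return conn.execute(
        f"SELECT id, username FROM users WHERE username = '{username}'"
    ).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Remediated: the driver binds the value, so it cannot alter the query structure.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchone()
```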

Posted 1 day ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we, as a Takeda team, can discover and deliver life-transforming treatments, guided by our commitment to patients, our people, and the planet. People join Takeda because they share in our purpose. And they stay because we're committed to an inclusive, safe, and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions.

Job ID: R0159362
Date posted: 07/31/2025
Location: Bengaluru, Karnataka

I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Overview: We are seeking a highly skilled and experienced Data Management Lead with expertise in data management principles, cloud technologies (AWS, Databricks, Informatica, etc.), software engineering practices, and system design and architecture. The ideal candidate will also have leadership experience managing a team of ~10 individuals and driving best practices in data management and engineering.

Responsibilities:
- Lead and manage a data management team of ~10 members, fostering growth and collaboration.
- Oversee the design, implementation, and maintenance of data infrastructure and pipelines.
- Ensure adherence to data management principles, including data quality, governance, and security.
- Utilize cloud platforms like AWS, and tools such as Databricks and Informatica, for data processing and storage solutions.
- Apply software engineering best practices, including DevOps, Infrastructure as Code (IaC), and testing automation, to streamline workflows.
- Design scalable and robust system architectures aligned with business needs.
- Collaborate with cross-functional stakeholders to ensure the seamless integration of data systems.
- Monitor and optimize system performance, reliability, and cost-effectiveness.

Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or related fields.
- 8+ years of experience in data management, with a strong understanding of cloud-based technologies like AWS, Databricks, and Informatica.
- Expertise in software engineering best practices such as IaC, DevOps pipelines, and comprehensive testing methods.
- Experience in system design and architecture, including distributed systems and integration patterns.
- Proven ability to lead and manage a team (~10 people), fostering an inclusive and high-performance culture.
- Strong problem-solving, communication, and stakeholder management skills.
- Strong expertise in data integration, data modeling, modern database technologies (graph, SQL, NoSQL), and AWS cloud technologies.
- Extensive experience in DBA, schema design, dimensional modeling, and SQL optimization.
- Excellent written and verbal communication skills, with the ability to collaborate effectively with cross-functional teams.

Preferred Skills:
- Certifications in AWS, DevOps, Data Engineering, or related technologies.
- Familiarity with Agile methodologies and tools.
- Knowledge of modern data governance frameworks and strategies.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time

Posted 1 day ago

Apply

3.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Testing/Quality Assurance
Main location: India, Karnataka, Bangalore
Position ID: J0725-1442
Employment Type: Full Time

Company Profile:
Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: ETL Testing Engineer
Position: Senior Test Engineer
Experience: 3-9 Years
Category: Quality Assurance/Software Testing
Shift: 1-10 pm / UK shift
Main location: Chennai/Bangalore

Position Description:
We are looking for an experienced DataStage tester to join our team. The ideal candidate should be passionate about coding and testing scalable, high-performance applications.

Your future duties and responsibilities:
Develop and execute ETL test cases to validate data extraction, transformation, and loading processes.
Write complex SQL queries to verify data integrity, consistency, and correctness across source and target systems.
Automate ETL testing workflows using Python, PyTest, or other testing frameworks (a minimal reconciliation test sketch follows this listing).
Perform data reconciliation, schema validation, and data quality checks.
Identify and report data anomalies, performance bottlenecks, and defects.
Work closely with Data Engineers, Analysts, and Business Teams to understand data requirements.
Design and maintain test data sets for validation.
Implement CI/CD pipelines for automated ETL testing (Jenkins, GitLab CI, etc.).
Document test results, defects, and validation reports.

Required qualifications to be successful in this role:
ETL Testing: strong experience in testing Informatica, Talend, SSIS, Databricks, or similar ETL tools.
SQL: advanced SQL skills (joins, aggregations, subqueries, stored procedures).
Python: proficiency in Python for test automation (Pandas, PySpark, PyTest).
Databases: hands-on experience with RDBMS (Oracle, SQL Server, PostgreSQL) and NoSQL (MongoDB, Cassandra).
Big Data Testing (good to have): Hadoop, Hive, Spark, Kafka.
Testing Tools: knowledge of Selenium, Airflow, Great Expectations, or similar frameworks.
Version Control: Git, GitHub/GitLab.
CI/CD: Jenkins, Azure DevOps, or similar.
Soft Skills: strong analytical and problem-solving skills, ability to work in Agile/Scrum environments, and good communication skills for cross-functional collaboration.

Preferred Qualifications:
Experience with cloud platforms (AWS, Azure).
Knowledge of data warehousing concepts (star schema, snowflake schema).
Certification in ETL Testing, SQL, or Python is a plus.

Skills: Data Warehousing, MS SQL Server, Python

What you can expect from us:
Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
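For orientation only: below is a minimal, hedged sketch of the kind of source-to-target reconciliation test this posting describes, using pandas, SQLAlchemy, and PyTest. The connection URLs, table names, and the load_table helper are illustrative assumptions, not details taken from the listing.

```python
# Minimal ETL reconciliation test sketch (connections and tables are assumed, not real).
import pandas as pd
import pytest
from sqlalchemy import create_engine

# Hypothetical connection strings; a real project would read these from configuration.
SOURCE_URL = "postgresql://user:pass@source-host/salesdb"
TARGET_URL = "postgresql://user:pass@warehouse-host/dwh"


def load_table(url: str, query: str) -> pd.DataFrame:
    """Run a query against the given database URL and return the result as a DataFrame."""
    engine = create_engine(url)
    with engine.connect() as conn:
        return pd.read_sql(query, conn)


@pytest.fixture(scope="module")
def source_orders() -> pd.DataFrame:
    return load_table(SOURCE_URL, "SELECT order_id, amount FROM orders")


@pytest.fixture(scope="module")
def target_orders() -> pd.DataFrame:
    return load_table(TARGET_URL, "SELECT order_id, amount FROM fact_orders")


def test_row_counts_match(source_orders, target_orders):
    # Completeness: every source row should have landed in the target.
    assert len(source_orders) == len(target_orders)


def test_amount_totals_match(source_orders, target_orders):
    # Checksum-style reconciliation on a numeric column.
    assert source_orders["amount"].sum() == pytest.approx(target_orders["amount"].sum())


def test_no_duplicate_keys(target_orders):
    # Data quality: the business key should be unique after loading.
    assert not target_orders["order_id"].duplicated().any()
```

In practice the same checks are often expressed declaratively with a framework such as Great Expectations, which the listing also mentions.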

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

Posted On: 31 Jul 2025
End Date: 31 Dec 2025
Required Experience: 4 - 8 Years
Role: Senior Software Engineer
Employment Type: Full Time Employee
Company: NewVision (New Vision Softcom & Consultancy Pvt. Ltd)
Function: Business Units (BU)
Department/Practice: Data / Data Engineering
Region: APAC
Country: India
Base Office Location: Pune, Maharashtra
Working Model: Hybrid
Weekly Off: Pune Office Standard
Skills: Azure Databricks
Highest Education: Graduation/equivalent course
Certifications: DP-201 (Designing an Azure Data Solution), DP-203T00 (Data Engineering on Microsoft Azure)
Working Language: English

Position Summary:
We are seeking a talented Databricks Data Engineer with a strong background in data engineering to join our team. You will play a key role in designing, building, and maintaining data pipelines using a variety of technologies, with a focus on the Microsoft Azure cloud platform.

Responsibilities:
Design, develop, and implement data pipelines using Azure Data Factory (ADF) or other orchestration tools.
Write efficient SQL queries to extract, transform, and load (ETL) data from various sources into Azure Synapse Analytics.
Utilize PySpark and Python for complex data processing tasks on large datasets within Azure Databricks (a minimal notebook sketch follows this listing).
Collaborate with data analysts to understand data requirements and ensure data quality.
Hands-on experience in designing and developing data lakes and warehouses.
Implement data governance practices to ensure data security and compliance.
Monitor and maintain data pipelines for optimal performance and troubleshoot any issues.
Develop and maintain unit tests for data pipeline code.
Work collaboratively with other engineers and data professionals in an Agile development environment.

Preferred Skills & Experience:
Good knowledge of PySpark and working knowledge of Python.
Full-stack Azure data engineering skills (Azure Data Factory, Databricks, and Synapse Analytics).
Experience with large dataset handling.
Hands-on experience in designing and developing data lakes and warehouses.

New Vision is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
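As a rough, hedged illustration of the PySpark work described above, the snippet below cleans a raw dataset in a Databricks-style notebook cell and writes it to a Delta table. The ADLS paths and column names are assumptions made for the example, not details from the posting.

```python
# Minimal PySpark sketch of a raw-to-curated cleanup step (paths and columns are assumed).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_silver").getOrCreate()

# Hypothetical ADLS Gen2 paths; on Databricks these usually come from config or widgets.
RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
SILVER_PATH = "abfss://silver@examplelake.dfs.core.windows.net/orders/"

raw = spark.read.format("json").load(RAW_PATH)

cleaned = (
    raw.dropDuplicates(["order_id"])                        # drop replayed events
       .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalise the timestamp
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)                         # discard obviously invalid rows
)

# Write as Delta, partitioned by date, so downstream Synapse/BI reads stay cheap.
(
    cleaned.withColumn("order_date", F.to_date("order_ts"))
           .write.format("delta")
           .mode("overwrite")
           .partitionBy("order_date")
           .save(SILVER_PATH)
)
```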

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Delhi

On-site

The client, a leading MNC, specializes in technology consulting and digital solutions for global enterprises. With a vast workforce of over 145,000 professionals across 90+ countries, they cater to 1100+ clients in various industries. The company offers a comprehensive range of services including consulting, IT solutions, enterprise applications, business processes, engineering, network services, customer experience, AI & analytics, and cloud infrastructure services. Notably, they have been recognized for their commitment to sustainability with the Terra Carta Seal, showcasing their dedication to building a climate- and nature-positive future.

As a Data Engineer with a minimum of 6 years of experience, you will be responsible for constructing and managing data pipelines. The ideal candidate should possess expertise in Databricks, AWS/Azure, and data storage technologies such as databases and distributed file systems. Familiarity with the Spark framework is essential, and prior experience in the retail sector would be advantageous.

Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines for processing large data volumes from diverse sources.
- Implement and oversee data integration solutions utilizing tools like Databricks, Snowflake, and other relevant technologies.
- Develop and optimize data models and schemas to support analytical and reporting requirements.
- Write efficient and sustainable Python code for data processing and transformations.
- Utilize Apache Spark for distributed data processing and large-scale analytics.
- Translate business needs into technical solutions.
- Ensure data quality and integrity through rigorous unit testing (a small test sketch follows this listing).
- Collaborate with cross-functional teams to integrate data pipelines with other systems.

Technical Requirements:
- Proficiency in Databricks for data integration and processing.
- Experience with ETL tools and processes.
- Strong Python programming skills with Apache Spark, emphasizing data processing and automation.
- Solid SQL skills and familiarity with relational databases.
- Understanding of data warehousing concepts and best practices.
- Exposure to cloud platforms such as AWS and Azure.
- Hands-on troubleshooting ability and problem-solving skills for complex data issues.
- Practical experience with Snowflake.
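The unit-testing responsibility above can be illustrated with a minimal, hedged sketch: a tiny transformation function tested against a local SparkSession. The add_revenue function, its columns, and the expected values are hypothetical and only for illustration.

```python
# Minimal sketch of unit-testing a Spark transformation locally (function and schema are assumed).
import pytest
from pyspark.sql import SparkSession, functions as F


def add_revenue(df):
    """Hypothetical transformation under test: revenue = quantity * unit_price."""
    return df.withColumn("revenue", F.col("quantity") * F.col("unit_price"))


@pytest.fixture(scope="module")
def spark():
    # local[2] keeps the test self-contained; no cluster is required.
    return SparkSession.builder.master("local[2]").appName("pipeline-tests").getOrCreate()


def test_add_revenue(spark):
    df = spark.createDataFrame(
        [("A", 2, 10.0), ("B", 3, 5.0)],
        ["sku", "quantity", "unit_price"],
    )
    result = {row["sku"]: row["revenue"] for row in add_revenue(df).collect()}
    assert result == {"A": 20.0, "B": 15.0}
```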

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

Chubb is a world-renowned insurance leader with operations spanning 54 countries and territories, offering a wide range of commercial and personal insurance solutions. Known for its extensive product portfolio, robust distribution network, exceptional financial stability, and global presence, Chubb is committed to providing top-notch services to its diverse clientele. The parent company, Chubb Limited, is publicly listed on the New York Stock Exchange (NYSE: CB) and is a constituent of the S&P 500 index, with a workforce of around 43,000 individuals worldwide. For more information, visit www.chubb.com.

Chubb India is embarking on an exciting digital transformation journey fueled by a focus on engineering excellence and analytics. The company takes pride in being officially certified as a Great Place to Work for the third consecutive year, underscoring a culture that nurtures innovation, growth, and collaboration. With a talented team of over 2,500 professionals, Chubb India promotes a startup mindset that encourages diverse perspectives, teamwork, and a solution-oriented approach. The organization is dedicated to honing expertise in engineering, analytics, and automation, empowering its teams to thrive in the ever-evolving digital landscape.

As a Full Stack Data Scientist within the Advanced Analytics team at Chubb, you will play a pivotal role in developing cutting-edge data-driven solutions using state-of-the-art machine learning and AI technologies. This technical position involves leveraging AI and machine learning techniques to automate underwriting processes, enhance claims outcomes, and provide innovative risk solutions. Ideal candidates possess a solid educational background in computer science, data science, statistics, applied mathematics, or related fields, coupled with a penchant for solving complex problems through innovative thinking while maintaining a keen focus on delivering actionable business insights. You should be proficient in using a diverse set of tools, strategies, machine learning algorithms, and programming languages to address a variety of challenges.

Key Responsibilities:
- Collaborate with global business partners to identify analysis requirements, manage deliverables, present results, and implement models.
- Leverage a wide range of machine learning, text, and image AI models to extract meaningful features from structured and unstructured data.
- Develop and deploy scalable and efficient machine learning models to automate processes, gain insights, and facilitate data-driven decision-making (a minimal modeling sketch follows this listing).
- Package and publish code and solutions in reusable Python formats for seamless integration into CI/CD pipelines and workflows.
- Ensure high-quality code that aligns with business objectives, quality standards, and secure web development practices.
- Build tools for streamlining the modeling pipeline, sharing knowledge, and implementing real-time monitoring and alerting systems for machine learning solutions.
- Establish and maintain automated testing and validation infrastructure, troubleshoot pipelines, and adhere to best practices for versioning, monitoring, and reusability.

Qualifications:
- Proficiency in ML concepts, supervised/unsupervised learning, ensemble techniques, and various ML models including Random Forest, XGBoost, and SVM.
- Strong experience with Azure cloud computing, containerization technologies (Docker, Kubernetes), and data science frameworks such as Pandas, NumPy, TensorFlow, Keras, PyTorch, and sklearn.
- Hands-on experience with DevOps tools such as Git, Jenkins, Sonar, and Nexus, along with data pipeline building, debugging, and unit testing practices.
- Familiarity with AI/ML applications, the Databricks ecosystem, and statistical/mathematical domains.

Why Chubb:
- Join a leading global insurance company with a strong focus on employee experience and a culture that fosters innovation and excellence.
- Benefit from a supportive work environment, industry leadership, and opportunities for personal and professional growth.
- Embrace a startup-like culture that values speed, agility, ownership, and continuous improvement.
- Enjoy comprehensive employee benefits that prioritize health, well-being, learning, and career advancement.

Employee Benefits:
- Access to savings and investment plans, upskilling opportunities, health and welfare benefits, and a supportive work environment that encourages inclusivity and growth.

Join Us:
Your contributions are integral to shaping the future at Chubb. If you are passionate about integrity, innovation, and inclusion and ready to make a difference, we invite you to be part of Chubb India's journey.

Apply Now: Chubb India Career Page
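To illustrate the supervised-learning workflow such a role typically involves, here is a minimal scikit-learn sketch on synthetic data; the features merely stand in for engineered policy or claims attributes, and nothing here reflects Chubb's actual models.

```python
# Minimal supervised-learning sketch on synthetic data (features and labels are fabricated).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))  # stand-in for engineered features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scaling is not strictly needed for tree models; it is kept to show the pipeline pattern.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")
```

Packaging a pipeline like this into a versioned Python module is what makes it reusable in the CI/CD workflows the posting mentions.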

Posted 2 days ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

The successful candidate will be responsible for developing and maintaining applications using Python, SQL, Reactjs, and Java. You will also be involved in building and managing data pipelines on platforms such as Databricks, DBT, Snowflake, RSL (Report Specification Language - Geneva), and RDL (Report Definition Language). Your experience with non-functional aspects such as performance management, scalability, and availability will be crucial. Additionally, you will collaborate closely with front-office, operations, and finance teams to enhance reporting and analysis for alternative investments. Working with cross-functional teams, you will drive automation, workflow efficiencies, and reporting enhancements. Troubleshooting system issues, implementing enhancements, and ensuring optimal system performance to follow the sun model for end-to-end coverage of applications will also be part of your responsibilities.

Qualifications & Experience:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: Minimum of 2 years of experience in enterprise software development and production management, preferably within financial services.
- Proficiency in at least one programming language: Python, Java, Reactjs, or SQL.
- Familiarity with alternative investments and their reporting requirements.
- Hands-on experience with relational databases and complex query authoring.
- Ability to thrive in a fast-paced work environment with quick iterations.
- Must be able to work out of our Bangalore office.

Preferred Qualifications:
- Knowledge of AWS/Azure services.
- Previous experience in the asset management/private equity domain.

This role provides an exciting opportunity to work in a fast-paced, engineering-focused startup environment and contribute to meaningful projects that address complex business challenges. Join our team and become part of a culture that values innovation, collaboration, and excellence.

FS Investments: 30 years of leadership in private markets
FS Investments is an alternative asset manager focused on delivering attractive returns across private equity, private credit, and real estate. With the recent acquisition of Portfolio Advisors in 2023, FS Investments now manages over $85 billion for institutional and wealth management clients globally. With over 30 years of experience and more than 500 employees across nine global offices, the firm's investment professionals oversee a variety of strategies across private markets and maintain relationships with 300+ sponsors. FS Investments' active partnership model fosters superior market insights and deal flow, informing the underwriting process and contributing to strong returns.

FS is an Equal Opportunity Employer. FS Investments does not accept unsolicited resumes from recruiters or search firms. Any resume or referral submitted without a signed agreement is the property of FS Investments, and no fee will be paid.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an Azure Data Engineer (Databricks Specialist) at CrossAsyst, you will be part of our high-impact Data & AI team, working on critical client-facing projects. With over 5 years of experience, you will utilize your expertise in Azure data services and Databricks to build robust, scalable data pipelines and drive technology innovation for our clients in Pune.

Your key responsibilities will include designing, developing, and deploying end-to-end data pipelines using Azure Databricks, Data Factory, and Synapse. You will be responsible for data ingestion, transformation, and wrangling from various sources, optimizing Spark jobs and Databricks notebooks for performance and cost-efficiency. Implementing DevOps best practices for CI/CD, Git integration, and automated testing will be essential in your role.

Collaborating with cross-functional teams such as data scientists, architects, and stakeholders, you will design scalable data lakehouse and data warehouse solutions using Delta Lake and Synapse (a minimal Delta upsert sketch follows this listing). Ensuring data security, access control, and compliance using Azure-native governance tools will also be part of your responsibilities. Additionally, you will work closely with data science teams for feature engineering and machine learning workflows within Databricks.

Your proactive mindset and strong coding ability in PySpark will be crucial in writing efficient SQL and PySpark code for analytics and transformation tasks. It will also be essential to proactively monitor and troubleshoot data pipelines in production environments. In this role, documenting solution architectures, workflows, and data lineage will contribute to the successful delivery of scalable, secure, and high-performance data solutions.

If you are looking to make an impact by driving technology innovation and delivering better and faster outcomes, we welcome you to join our team at CrossAsyst.
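As a hedged sketch of the Delta Lake lakehouse pattern this listing references, the snippet below upserts an incremental batch into a Delta table with a MERGE. The storage paths and the customer_id key are assumptions for illustration; on Databricks the delta package is available out of the box.

```python
# Minimal sketch of an incremental upsert into a Delta table (paths and key column are assumed).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("customer_upsert").getOrCreate()

TARGET_PATH = "abfss://silver@examplelake.dfs.core.windows.net/customers/"
updates = spark.read.format("parquet").load(
    "abfss://landing@examplelake.dfs.core.windows.net/customers_incremental/"
)

target = DeltaTable.forPath(spark, TARGET_PATH)

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")  # match on the business key
    .whenMatchedUpdateAll()       # refresh attributes of existing customers
    .whenNotMatchedInsertAll()    # insert customers seen for the first time
    .execute()
)
```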

Posted 2 days ago

Apply