5.0 - 10.0 years
10 - 12 Lacs
Chennai
Work from Office
We are seeking a Databricks developer with deep SQL expertise to support the development of scalable data pipelines and analytics workflows. The developer will work closely with data engineers and BI analysts to prepare clean, query-optimized datasets for reporting and modeling.
Posted 1 week ago
8 - 13 years
11 - 21 Lacs
Bengaluru, Hyderabad, Kolkata
Work from Office
Responsibilities
• Maintain close awareness of new and emerging technologies and their potential application to service offerings and products.
• Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
• Demonstrate knowledge of relevant industry trends and standards.
• Demonstrate strong analytical and technical problem-solving skills.
• Must have experience in the Data Engineering domain.

Qualifications we seek in you!
Minimum qualifications
• Bachelor's Degree (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience.
• Excellent coding skills in Python or Scala, preferably Python.
• Implemented at least 2 projects end-to-end in Databricks.
• Hands-on experience with core Databricks components:
o Delta Lake
o dbConnect
o Databricks API 2.0
o Databricks Workflows orchestration
• Well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
• Good understanding of how to build complex data pipelines.
• Good knowledge of data structures and algorithms.
• Strong in SQL and Spark SQL (a minimal sketch follows this listing).
• Strong performance-optimization skills to improve efficiency and reduce cost.
• Experience with both batch and streaming data pipelines.
• Extensive knowledge of the Spark and Hive data processing frameworks.
• Experience on at least one cloud (Azure, AWS, GCP) and its most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases.
• Strong in writing unit and integration tests.
• Strong communication skills; has worked on a team of 5 or more.
• Great attitude toward learning new skills and upskilling existing ones.

Preferred qualifications
• Unity Catalog and basic data governance knowledge.
• Understanding of Databricks SQL endpoints.
• CI/CD experience building pipelines for Databricks jobs.
• Experience on a migration project building a unified data platform.
• Knowledge of dbt.
• Knowledge of Docker and Kubernetes.
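For context on the Delta Lake and Spark SQL skills this listing names, here is a minimal, hypothetical sketch of the kind of batch pipeline such a role involves. The storage paths, column names, and aggregation are illustrative assumptions, not details from the posting; on Databricks, the `spark` session and Delta Lake support are provided by the runtime.

```python
# A minimal sketch of a batch pipeline: read raw files, clean them
# with Spark SQL, and persist a Delta table for reporting.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-batch-pipeline").getOrCreate()

# Ingest raw CSV data from cloud storage (bucket path is an assumption).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3://example-bucket/raw/orders/"))

# Basic cleansing: drop duplicates and rows missing the primary key.
clean = raw.dropDuplicates(["order_id"]).filter(F.col("order_id").isNotNull())

# Expose the DataFrame to Spark SQL and build a query-optimized aggregate.
clean.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

# Persist as a Delta table, partitioned so downstream reporting
# queries can prune by date.
(daily.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("s3://example-bucket/curated/daily_orders/"))
```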
Posted 2 months ago
5 - 10 years
15 - 25 Lacs
Delhi NCR, Bengaluru, Hyderabad
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Principal Consultant - Databricks Developer (AWS)! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems while meeting both functional and non-functional requirements.

Responsibilities
• Maintain close awareness of new and emerging technologies and their potential application to service offerings and products.
• Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
• Demonstrate knowledge of relevant industry trends and standards.
• Demonstrate strong analytical and technical problem-solving skills.
• Must have experience in the Data Engineering domain.

Qualifications we seek in you!
Minimum qualifications
• Bachelor's Degree (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience.
• Excellent coding skills in Python or Scala, preferably Python.
• Implemented at least 2 projects end-to-end in Databricks.
• Hands-on experience with core Databricks components:
o Delta Lake
o dbConnect
o Databricks API 2.0
o Databricks Workflows orchestration
• Well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
• Good understanding of how to build complex data pipelines.
• Good knowledge of data structures and algorithms.
• Strong in SQL and Spark SQL.
• Strong performance-optimization skills to improve efficiency and reduce cost.
• Experience with both batch and streaming data pipelines.
• Extensive knowledge of the Spark and Hive data processing frameworks.
• Experience on at least one cloud (Azure, AWS, GCP) and its most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases.
• Strong in writing unit and integration tests.
• Strong communication skills; has worked on a team of 5 or more.
• Great attitude toward learning new skills and upskilling existing ones.

Preferred qualifications
• Unity Catalog and basic data governance knowledge.
• Understanding of Databricks SQL endpoints.
• CI/CD experience building pipelines for Databricks jobs.
• Experience on a migration project building a unified data platform.
• Knowledge of dbt.
• Knowledge of Docker and Kubernetes.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a "starter kit," paying to apply, or purchasing equipment or training.
Posted 2 months ago
6 - 11 years
15 - 20 Lacs
Chennai, Pune, Bengaluru
Work from Office
Location: Bangalore/Chennai/Pune
Role: Databricks Developer
Qualification: Any Graduate
Experience: 8+ Years

ARA's client is a leading IT solutions provider, offering Applications, Business Process Outsourcing (BPO), and Infrastructure services globally through a combination of technology know-how and domain and process expertise.

The Role:
We are looking for a Databricks Lead (with experience in AWS).

Key Responsibilities:
• Develop and optimize ETL pipelines from various data sources using Databricks on the cloud (AWS, Azure, etc.).
• Implement standardized pipelines with automated testing, DevOps for CI/CD, Terraform for infrastructure as code, and Ganglia for monitoring.
• Continuously improve systems through performance enhancements and cost reductions in compute and storage.
• Develop and maintain data pipelines for effective data extraction, transformation, and loading.
• Data processing and API integration: utilize Spark Structured Streaming for real-time data processing and integrate data outputs with REST APIs (see the sketch after this posting).
• Develop and maintain scalable architecture, database designs, and data pipelines.
• Data engineering: develop and optimize data pipelines for ingesting, processing, and transforming large volumes of data from diverse sources using Databricks and related technologies.
• Performance optimization: optimize Databricks jobs, queries, and workflows for performance, scalability, and cost-efficiency, leveraging best practices and optimization techniques.
• Lead data engineering projects to manage and implement data-driven systems.
• Develop and maintain data quality standards.
• Implement security controls, access management policies, and data governance frameworks within Databricks to ensure data privacy, compliance, and regulatory requirements are met.
• Integrate data across different systems and platforms.
• Production deployment experience with data governance solutions and hands-on experience with cloud data lakes.

Skills Required:
• 5+ years of relevant experience, with 8-10 years of overall experience.
• Experience developing and implementing ETL pipelines from various data sources using Databricks on the cloud (AWS, Azure).
• Proven experience designing, implementing, and managing data analytics using Databricks.
• Strong proficiency in Apache Spark, SQL, and Python, with hands-on experience developing data pipelines.
• Technologies: IaaS (AWS, Azure, or GCP), the Databricks platform, Delta Lake storage, and Spark (PySpark, Spark SQL).
• Ability to work independently under some ambiguity and juggle multiple demands.
• Understand and document data transformation requirements.
• Provide technical guidance and mentorship to junior team members.
• Experience working in the Insurance domain.
• Data modeling/data lineage and awareness of canonical data model implementation.
• Good to have: experience in Airflow, Splunk, Power BI, Git, and Azure DevOps.

Behavioral Skills:
• Excellent communication skills to work closely with customers.
• Resolve technical issues on projects.
• Participate as a team member; collaborate and communicate effectively with stakeholders and ensure client satisfaction.

Qualifications & Experience:
• At least 8 years of experience in Databricks implementation/development.
• Good to have: Power BI architecture experience/exposure.
• Experience working on cloud platforms such as Azure and AWS, using services like Data Factory, Databricks, Glue, S3, and Athena.
• Education qualification: any bachelor's degree from a reputed college.
Thanks & Regards, Vijaya N vijaya.nakka@araresources.com
Posted 3 months ago