3.0 - 6.0 years
5 - 9 Lacs
Hubli, Mangaluru, Mysuru
Work from Office
Build an interim archive solution for EDI on the client's Maestro platform, a self-service, cloud-based platform on Azure Data Lake Storage. Experience in storage and retrieval of EDI messages provided by the cloud-based Seeburger EDI platform. Experience as an Azure Data Integration Engineer with expertise in Azure Data Factory, Databricks, Data Lake, Key Vault, and Azure Active Directory. Understanding of security/encryption considerations and options.
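A hedged sketch, not the client's implementation, of the storage side of this requirement: archiving one EDI message to ADLS Gen2 with Azure AD authentication and Key Vault-sourced configuration. The vault URL, container name, and secret name below are hypothetical.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.storage.filedatalake import DataLakeServiceClient

credential = DefaultAzureCredential()  # service principal or managed identity

# Pull the storage endpoint from Key Vault instead of hard-coding it.
vault = SecretClient(vault_url="https://example-vault.vault.azure.net",
                     credential=credential)
account_url = vault.get_secret("adls-account-url").value

service = DataLakeServiceClient(account_url=account_url, credential=credential)
archive = service.get_file_system_client(file_system="edi-archive")

def archive_edi_message(message_id: str, payload: bytes) -> None:
    """Write one Seeburger-delivered EDI message into the interim archive."""
    # ADLS encrypts at rest by default; customer-managed keys are an option.
    archive.get_file_client(f"seeburger/{message_id}.edi").upload_data(
        payload, overwrite=True)
```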
Posted 1 month ago
2.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
A team player, with the ability to train others in procedural and technical topics, and strong communication and collaboration skills.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: able to write complex SQL queries; experience in Azure Databricks.
Preferred technical and professional experience: excellent communication and stakeholder management skills.
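For illustration of the "complex SQL queries" requirement, here is a minimal sketch of a CTE-plus-window-function query run from an Azure Databricks notebook via spark.sql; the sales.orders table and its columns are invented.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # predefined as `spark` in Databricks

top_orders = spark.sql("""
    WITH ranked AS (
        SELECT customer_id, order_id, amount,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY amount DESC) AS rn
        FROM sales.orders
    )
    SELECT customer_id, order_id, amount
    FROM ranked
    WHERE rn <= 3  -- top three orders per customer
""")
top_orders.show()
```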
Posted 1 month ago
2.0 - 5.0 years
4 - 8 Lacs
Kolkata
Work from Office
A team player, with the ability to train others in procedural and technical topics, and strong communication and collaboration skills.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: able to write complex SQL queries; experience in Azure Databricks.
Preferred technical and professional experience: excellent communication and stakeholder management skills.
Posted 1 month ago
5.0 - 10.0 years
18 - 33 Lacs
Coimbatore
Hybrid
We are looking for a Data Software Engineer, Coimbatore (Hybrid).
Role 1: Python, Spark, Azure Databricks
Role 2: Python, Spark, GCP
Role 3: Python, Spark, AWS, Kafka
Experience: 5-12 years
Detailed JD:
1. 5-12 years of experience in Big Data and data-related technologies
2. Expert-level understanding of distributed computing principles
3. Expert-level knowledge of and experience in Apache Spark
4. Hands-on programming with Python
5. Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
Interview mode: face-to-face only. If you are available for a face-to-face interview on 12th Jul 2025, please apply for this role.
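A minimal sketch of the hands-on PySpark work items 3-5 describe, under an assumed HDFS path and schema: read raw events (e.g., landed by Sqoop) and produce a daily rollup.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

events = spark.read.parquet("hdfs:///data/raw/events")  # hypothetical path
daily = (events
         .withColumn("day", F.to_date("event_ts"))
         .groupBy("day", "event_type")
         .count())
daily.write.mode("overwrite").parquet("hdfs:///data/curated/daily_counts")
```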
Posted 1 month ago
8.0 - 12.0 years
22 - 27 Lacs
Bengaluru
Work from Office
KEY RESPONSIBILITIES
We are looking for a highly experienced and hands-on Senior Data Architect to lead the design and implementation of scalable, secure, and high-performance data solutions across Azure and multi-cloud environments. The ideal candidate will have deep expertise in Azure Databricks, Azure Data Factory, Unity Catalog, and streaming/time-series data architectures. Experience in the life sciences industry is a strong plus. More specifically, you will lead the following responsibilities (a streaming time-series sketch follows this posting):
- Implement enterprise-grade data platforms using Azure and other cloud technologies
- Leverage Azure Databricks, Data Factory, and Unity Catalog for data engineering, orchestration, and governance
- Design and optimize streaming and time-series data pipelines for real-time analytics and operational intelligence
- Build and manage functional and scalable data models to support diverse business use cases
- Define and enforce data architecture standards, best practices, and security policies
- Collaborate with data engineers, scientists, and business stakeholders to translate requirements into robust data solutions
- Create reusable components for rapid development of the data platform
- Drive initiatives around data quality, metadata management, and data lifecycle governance
- Assist data scientists and visualization developers in developing and deploying algorithms for all analytics needs
- Stay current with emerging technologies and trends, especially in regulated industries like life sciences
YEAR ONE CRITICAL SUCCESS FACTORS
- Successfully complete project implementations within time, cost, and quality standards, in line with business and IT expectations
- Successfully complete implementation of various data integration initiatives
- Successfully develop capabilities that ensure data accuracy, integrity, and accessibility
PROFESSIONAL EXPERIENCE / QUALIFICATIONS
- Bachelor's or Master's degree in Computer Science or Engineering, or equivalent years of work experience
- 10+ years of proven experience in architecting and implementing cloud data lake, data warehouse platforms, master data management, data integration, and OLTP database solutions
- 10+ years of experience in data architecture, data engineering, and cloud-based data platforms
- 5+ years of hands-on experience with Azure cloud services
- Expertise in: Azure Databricks, Data Factory, Unity Catalog; streaming technologies (e.g., Spark Structured Streaming, Azure Event Hubs, Kafka); time-series data modelling and analytics; functional data modelling (e.g., dimensional, normalized, data vault)
- A comprehensive understanding of data lakehouse processes and the supporting technologies, such as Azure Data Factory, Azure Databricks, ADLS Gen2, IoT, and other cloud technologies
- Strong knowledge of industry best practices around data architecture in both cloud-based and on-premises big data solutions
- Exceptional understanding of building solution architectures, design patterns, network topology, and data security frameworks
- Experience architecting data solutions across hybrid (cloud, on-premises) data platforms
- A comprehensive understanding of the principles of and best practices behind data engineering, and the supporting technologies such as RDBMS, NoSQL, cache and in-memory stores
- Excellent problem solving and data modelling skills (logical, physical, semantic, and integration models), including normalisation, OLAP/OLTP principles, and entity relationship analysis
- Prior experience working with middleware platforms like MuleSoft
- Comprehensive experience working with SQL Server, Azure SQL, RDS, Cosmos DB, MongoDB, PostgreSQL
- Experience with stream processing tools (Spark Structured Streaming, Azure Stream Analytics, Storm, etc.)
- Familiarity with Agile/Scrum methodologies
- Excellent verbal and written English skills
- Proven experience in life sciences or other regulated industries
Equal Opportunity Employer
Biocon Biologics is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, colour, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability, or veteran status. Biocon Biologics also complies with all applicable national, state, and local laws governing non-discrimination in employment, as well as work authorisation and employment eligibility verification requirements of the Immigration and Nationality Act.
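As a hedged sketch of the streaming time-series pipelines this posting emphasizes (not the employer's actual design): read sensor events from Kafka with Spark Structured Streaming and write windowed aggregates to Delta. The broker, topic, and payload fields are assumptions, and the Kafka source needs the spark-sql-kafka connector (bundled in Databricks); Azure Event Hubs also exposes a Kafka-compatible endpoint.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "sensor-readings")
       .load())

readings = raw.select(
    F.col("timestamp").alias("event_time"),
    F.get_json_object(F.col("value").cast("string"), "$.device_id").alias("device_id"),
    F.get_json_object(F.col("value").cast("string"), "$.temp").cast("double").alias("temp"),
)

# 5-minute tumbling-window averages per device, tolerating 10 min of lateness.
agg = (readings
       .withWatermark("event_time", "10 minutes")
       .groupBy(F.window("event_time", "5 minutes"), "device_id")
       .agg(F.avg("temp").alias("avg_temp")))

query = (agg.writeStream.outputMode("append")
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/sensor_agg")
         .start("/mnt/curated/sensor_agg"))
```

The watermark bounds streaming state, while append mode emits each window only once it can no longer receive late events.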
Posted 1 month ago
5.0 - 9.0 years
10 - 19 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & responsibilities
We are seeking a skilled Azure Databricks Developer with 5 to 10 years of experience to design, develop, and optimize big data pipelines using Databricks on Azure. The ideal candidate will have strong expertise in PySpark, Azure Data Lake, and data engineering best practices in a cloud environment.
Key Responsibilities:
- Design and implement ETL/ELT pipelines using Azure Databricks and PySpark.
- Work with structured and unstructured data from diverse sources (e.g., ADLS Gen2, SQL databases, APIs).
- Optimize Spark jobs for performance and cost-efficiency.
- Collaborate with data analysts, architects, and business stakeholders to understand data needs.
- Develop reusable code components and automate workflows using Azure Data Factory (ADF).
- Implement data quality checks, logging, and monitoring.
- Participate in code reviews and adhere to software engineering best practices.
Required Skills & Qualifications:
- 3-5 years of experience in Apache Spark/PySpark.
- 3-5 years working with Azure Databricks and Azure Data Services (ADLS Gen2, ADF, Synapse).
- Strong understanding of data warehousing, ETL, and data lake architectures.
- Proficiency in Python and SQL.
- Experience with Git, CI/CD tools, and version control practices.
If you are interested, please reply with the details below to bhuvaneshwari.kr@cgi.com
Name:
Mobile No:
Email id:
Total experience:
Current company:
Current CTC:
Expected CTC:
Notice period:
Available for face-to-face interview on 5 July (Saturday) (Yes/No):
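A minimal sketch, with hypothetical storage account and container names, of the read-transform-load pattern in the first responsibility: raw CSV in ADLS Gen2 in, curated Delta table out.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: raw CSV landed in ADLS Gen2 (abfss:// is the ADLS Gen2 scheme).
raw = (spark.read.option("header", True)
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Transform: type-cast, de-duplicate, stamp the load date.
clean = (raw.withColumn("amount", F.col("amount").cast("double"))
            .dropDuplicates(["order_id"])
            .withColumn("load_date", F.current_date()))

# Load: persist as Delta for downstream consumers.
(clean.write.format("delta").mode("overwrite")
      .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```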
Posted 1 month ago
5.0 - 8.0 years
10 - 20 Lacs
Hyderabad/Secunderabad
Work from Office
Title: Sr Azure Data Engineer
Experience: 5 to 8 years | Location: Hyderabad | Working hours: 12.00 pm to 9.00 pm
Key skills: SQL, Azure Data Factory, Azure Data Lake, Azure Databricks, Synapse, Data Fabric architecture.
We are looking for an experienced data engineer to join our team. You will use various methods to transform raw data into useful data systems; for example, you'll create pipelines and help design data warehousing systems. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this data engineering position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods.
Responsibilities:
- Analyze and organize raw data
- Build data systems and pipelines to support reports and dashboards in Power BI or other analytical tools
- Evaluate business needs and objectives
- Conduct complex data analysis and report on results
- Combine raw information from different sources
- Explore ways to enhance data quality and reliability
- Collaborate with data scientists and architects on several projects
Requirements and skills:
- Previous experience as a data engineer or in a similar role
- Experience with Data Fabric architecture
- Technical expertise with data models, data mining, and segmentation techniques
- Hands-on experience with SQL database design
- Experience with SQL Server querying and stored procedures
- Great numerical and analytical skills
- Experience with Azure Synapse, Snowflake, or another cloud-based data warehouse
- Experience with ETL and pipelines
- Degree in Computer Science, IT, or a similar field
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Work from Office
• Experience with ETL testing.
• Ability to create Databricks notebooks to automate manual tests.
• Ability to create and run test pipelines and interpret the results.
• Ability to test complex reports and write queries to check each metric.
Required candidate profile:
• Experience in Azure Databricks and SQL queries, with the ability to analyse data in a data warehouse environment.
• Ability to test complex reports and write queries to check each metric.
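A hedged example of automating two such manual checks from a Databricks notebook; every table name and query here is invented for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def assert_counts_match(source_table: str, target_table: str) -> None:
    """Fail the run if the load dropped or duplicated rows."""
    src, tgt = spark.table(source_table).count(), spark.table(target_table).count()
    assert src == tgt, f"Row count mismatch: {source_table}={src}, {target_table}={tgt}"

def assert_metric_matches(report_sql: str, expected_sql: str) -> None:
    """Reconcile a report metric against an independent query of the base data."""
    reported = spark.sql(report_sql).collect()[0][0]
    expected = spark.sql(expected_sql).collect()[0][0]
    assert abs(reported - expected) < 1e-6, f"{reported} != {expected}"

assert_counts_match("staging.orders", "dw.fact_orders")
assert_metric_matches("SELECT SUM(revenue) FROM dw.fact_orders",
                      "SELECT SUM(amount) FROM staging.orders")
```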
Posted 1 month ago
5.0 - 10.0 years
25 - 35 Lacs
Pune, Ahmedabad
Work from Office
Job Title: Sr. Data Engineer
Location: Ahmedabad/Pune | Experience: 5+ years
Educational qualification: UG: BS/MS in Computer Science or other engineering/technical degree
Roles and responsibilities:
- Design and implement end-to-end data solutions using Microsoft Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure Databricks, and SSIS.
- Develop complex transformation logic using SQL Server, SSIS, and ADF, and develop ETL jobs/pipelines to execute those mappings concurrently.
- Maintain and enhance existing ETL pipelines, warehouses, and reporting leveraging the traditional MS SQL stack.
- Understand REST API principles and create ADF pipelines to handle HTTP requests for APIs (an illustrative sketch follows this posting).
- Be well versed in best practices for development and deployment of SSIS packages, SQL jobs, and ADF pipelines.
- Implement and manage source control practices using Git within Azure DevOps to ensure code integrity and facilitate collaboration.
- Participate in the development and maintenance of CI/CD pipelines for automated testing and deployment of BI solutions.
Preferred skills, but not required:
- Understanding of the Azure environment and developing Azure Logic Apps and Azure Function Apps.
- Understanding of code deployment, Git, CI/CD, and deployment of developed ETL code (SSIS, ADF).
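ADF pipelines are configured in JSON rather than code, so as a language-neutral illustration of the same REST-to-lake copy pattern, here is a hedged Python sketch; the API URL, container, and path are hypothetical.

```python
import json
import requests
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

resp = requests.get("https://api.example.com/v1/customers", timeout=30)
resp.raise_for_status()

service = DataLakeServiceClient(
    account_url="https://examplelake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
raw = service.get_file_system_client("raw")
raw.get_file_client("customers/latest.json").upload_data(
    json.dumps(resp.json()), overwrite=True)
```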
Posted 1 month ago
6.0 - 11.0 years
4 - 7 Lacs
Ghaziabad
Remote
Dear Candidate,
We are looking for a Data Engineer trainer with Databricks and Snowflake on a part-time basis who can provide training to our US students. Please find the job description below for your reference. If it seems a good fit for you, please revert to us with your updated resume.
Job Summary
We are looking for a skilled and experienced Data Engineer Trainer to join our team! In this role, you will deliver training content to our US-based students in Data Engineering with Snowflake and Databricks. You will have an opportunity to combine a passion for teaching with enthusiasm for technology, to drive learning and establish positive customer relationships. You should have excellent communication skills and proven technology training experience.
Key job responsibilities
In this role, you will be at the heart of the world-class programs delivered by Synergistic Compusoft Pvt Ltd. Your job responsibilities will include:
1. Training working professionals on in-demand skills like Databricks, Snowflake, and Azure Data Lake.
2. Proficiency in basic and advanced SQL programming concepts (procedures, analytical functions, etc.).
3. Good knowledge and understanding of data warehouse concepts.
4. Experience with designing and implementing modern data platforms (data fabrics, data mesh, data hubs, etc.).
5. Experience with the design of data catalogs/dictionaries driven by active metadata.
6. Delivering highly interactive lectures online that are in line with Synergistic Compusoft's teaching methodology.
7. Developing cutting-edge and innovative content that helps classes be delivered in an interesting way.
8. Strong programming skills in languages such as SQL, Python, or Scala.
9. Knowledge of data integration patterns, data lakes, and data warehouses.
10. Experience with data quality, data governance, and data security best practices.
Note: Trainers have to prepare our students for certification on any global Azure certification and deliver content accordingly. Excellent communication and collaboration skills required.
Primary skills: Databricks, Snowflake
Secondary skills: ADF, Databricks, Python
Perks and benefits
- Remuneration best in the industry (55-60k per month)
- 5 days working (Mon-Fri)
- Part time: 2.5 to 3 hours, remote (night shift, 10:30 PM onwards)
- The curriculum and syllabus should be provided by the trainer and should align with the Azure certification requirements.
- The duration of a single batch depends on the trainer, but cannot exceed 3 months.
Company website: www.synergisticit.com
Company's LinkedIn profile: https://www.linkedin.com/redir/redirect?url=https%3A%2F%2Fsynergisticit%2Ecom%2F&urlhash=rKyX&trk=about_website
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Gurugram, Chennai
Hybrid
- Highly experienced in designing and implementing complex data pipelines using data engineering techniques with Azure Data Factory and Databricks
- Experienced in big data technologies
- Experienced in development using PySpark
- Experienced in performance optimization on large datasets (multi-million records); an illustrative sketch follows
- Good problem-solving skills
- Good understanding of Azure services
- Collaborative, with good communication skills
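A hedged sketch of two common levers behind that performance-optimization requirement: broadcast the small dimension side of a join with a multi-million-row fact table, and right-size shuffle parallelism. Paths, columns, and the partition count are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
spark.conf.set("spark.sql.shuffle.partitions", "400")  # tune to data/cluster size

facts = spark.read.parquet("/mnt/curated/transactions")  # large fact table
dims = spark.read.parquet("/mnt/curated/merchants")      # small lookup table

# Broadcasting the small side avoids shuffling the multi-million-row table.
joined = facts.join(F.broadcast(dims), "merchant_id")
joined.groupBy("merchant_category").agg(F.sum("amount").alias("total")).show()
```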
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
- Highly experienced in designing and implementing complex data pipelines using data engineering techniques with Azure Data Factory and Databricks
- Experienced in big data technologies
- Experienced in development using PySpark
- Experienced in performance optimization on large datasets (multi-million records)
- Good problem-solving skills
- Good understanding of Azure services
- Collaborative, with good communication skills
Posted 1 month ago
10.0 - 16.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Work from Office
Roles and Responsibilities
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Data Lake Storage (ADLS).
- Develop complex SQL queries to optimize database performance and troubleshoot issues in Azure SQL databases.
- Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business needs.
- Implement data quality checks using PySpark on big data datasets stored in Azure Blob Storage or ADLS (a sketch follows this posting).
- Troubleshoot technical issues related to ADF workflows, SQL queries, and Python scripts.
Desired Candidate Profile
- 8+ years of experience as an Azure Data Engineer with expertise in ADF, ADLS Gen2, Azure Data Lake, Databricks, PySpark, SQL, and Python.
- Bachelor's degree in any specialization (BCA/B.Tech/B.E.).
- Strong understanding of cloud computing concepts and experience working with the Microsoft Azure platform.
Location: Chennai, Coimbatore, Hyderabad, Bangalore, Pune & Gurgaon.
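A minimal sketch of the PySpark data quality checks mentioned above, assuming a hypothetical customer dataset in ADLS; real pipelines would swap in their own rules.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/customers/")

checks = {
    "null_ids": df.filter(F.col("customer_id").isNull()).count(),
    "duplicate_ids": df.count() - df.dropDuplicates(["customer_id"]).count(),
    "bad_emails": df.filter(~F.col("email").rlike(r"^[^@]+@[^@]+$")).count(),
}
failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```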
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Work from Office
Roles and Responsibilities
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Data Lake Storage (ADLS).
- Develop complex SQL queries to optimize database performance and troubleshoot issues in Azure SQL databases.
- Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business needs.
- Implement data quality checks using PySpark on big data datasets stored in Azure Blob Storage or ADLS.
- Troubleshoot technical issues related to ADF workflows, SQL queries, and Python scripts.
Desired Candidate Profile
- 5-10 years of experience as an Azure Data Engineer with expertise in ADF, ADLS Gen2, Azure Data Lake, Databricks, PySpark, SQL, and Python.
- Bachelor's degree in any specialization (BCA/B.Tech/B.E.).
- Strong understanding of cloud computing concepts and experience working with the Microsoft Azure platform.
Location: Chennai, Coimbatore, Hyderabad, Bangalore, Pune & Gurgaon.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Gurugram
Work from Office
Job Description
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations, with our experience in retail, high technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.
Role: Azure Data Engineer | Experience: 4+ years | Skill set: Azure Synapse, PySpark, ADF, and SQL | Location: Pune, Hyderabad, Gurgaon
- 5+ years of experience in software development, technical operations, and running large-scale applications.
- 4+ years of experience in developing or supporting Azure Data Factory (API/APIM), Azure Databricks, Azure DevOps, Azure Data Lake Storage (ADLS), SQL and Synapse data warehouse, and Azure Cosmos DB.
- 2+ years of experience working in data engineering.
- Any experience in data virtualization products like Denodo is desirable.
- Azure Data Engineer or Solutions Architect certification is desirable.
- Should have a good understanding of container platforms like Docker and Kubernetes.
- Should be able to assess the application/platform from time to time for architectural improvements and provide inputs to the relevant teams.
- Very good troubleshooting skills (quick identification of application issues and quick resolutions with no or minimal user/business impact).
- Hands-on experience working with high-volume, mission-critical applications.
- Deep appreciation of IT tools, techniques, systems, and solutions.
- Excellent communication skills, along with experience driving triage calls involving different technical stakeholders.
- Creative problem-solving skills for cross-functional issues amidst changing priorities.
- Flexible and resourceful in swiftly managing changing operational goals and demands.
- Good experience handling escalations, taking complete responsibility and ownership of critical issues through to technical/logical closure.
- Good understanding of the IT Infrastructure Library (ITIL) framework and the various IT Service Management (ITSM) tools available in the marketplace.
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR
Hybrid
Ready to build the future with AI?
At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Principal Consultant, Databricks Developer (AWS)! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.
Responsibilities
• Maintain close awareness of new and emerging technologies and their potential application to service offerings and products.
• Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
• Demonstrate knowledge of relevant industry trends and standards.
• Demonstrate strong analytical and technical problem-solving skills.
• Must have experience in the data engineering domain.
Qualifications we seek in you!
Minimum qualifications
• Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
• Must have excellent coding skills in either Python or Scala, preferably Python.
• Must have experience in the data engineering domain.
• Must have implemented at least 2 projects end-to-end in Databricks.
• Must have experience with Databricks components including Delta Lake, dbConnect, the Databricks API 2.0, and Databricks workflows orchestration.
• Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
• Must have a good understanding of how to create complex data pipelines (see the Delta Lake MERGE sketch after this posting).
• Must have good knowledge of data structures and algorithms.
• Must be strong in SQL and Spark SQL.
• Must have strong performance optimization skills to improve efficiency and reduce cost.
• Must have worked on both batch and streaming data pipelines.
• Must have extensive knowledge of the Spark and Hive data processing frameworks.
• Must have worked on at least one cloud (Azure, AWS, GCP) and its most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases.
• Must be strong in writing unit test cases and integration tests.
• Must have strong communication skills and have worked on teams of size 5 plus.
• Must have a great attitude towards learning new skills and upskilling existing skills.
Preferred Qualifications
• Good to have Unity Catalog and basic governance knowledge.
• Good to have an understanding of Databricks SQL Endpoints.
• Good to have CI/CD experience building pipelines for Databricks jobs.
• Good to have worked on a migration project to build a unified data platform.
• Good to have knowledge of dbt.
• Good to have knowledge of Docker and Kubernetes.
Why join Genpact?
• Lead AI-first transformation: build and scale AI solutions that redefine industries
• Make an impact: drive change for global enterprises and solve business challenges that matter
• Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills
• Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace
• Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build
• Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress
Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
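As one hedged example of the Delta Lake and complex-pipeline skills the posting asks for, the sketch below performs an idempotent MERGE upsert from a daily batch into a Delta table. The S3 paths and join key are invented, and it needs a Delta-enabled Spark environment such as Databricks.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("s3://example-bucket/landing/customers/2025-07-01/")
target = DeltaTable.forPath(spark, "s3://example-bucket/delta/customers")

(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()      # refresh existing customers
 .whenNotMatchedInsertAll()   # add new ones
 .execute())
```

Because MERGE is keyed on customer_id, re-running the same batch leaves the table unchanged, which is what makes the load safe to retry.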
Posted 1 month ago
4.0 - 9.0 years
4 - 9 Lacs
Kolkata, Hyderabad, Bengaluru
Work from Office
Responsibilities
A day in the life of an Infoscion
• As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction.
• You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain.
• You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes.
• You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Additional Responsibilities:
• Knowledge of more than one technology
• Basics of architecture and design fundamentals
• Knowledge of testing tools
• Knowledge of agile methodologies
• Understanding of project life cycle activities on development and maintenance projects
• Understanding of one or more estimation methodologies, knowledge of quality processes
• Basics of the business domain, to understand the business requirements
• Analytical abilities, strong technical skills, good communication skills
• Good understanding of the technology and domain
• Awareness of the latest technologies and trends
• Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods
• Excellent problem solving, analytical, and debugging skills
Technical and Professional Requirements:
Primary skills: Azure Databricks
Preferred Skills: Technology->Cloud Platform->Azure Development & Solution Architecting
Posted 1 month ago
5.0 - 10.0 years
12 - 22 Lacs
Pune, Bengaluru
Hybrid
Job Summary
We are looking for a highly skilled Cloud Engineer with a strong background in real-time and batch data ingestion and processing, Azure DevOps, and the Azure cloud. The ideal candidate should have a deep understanding of streaming architectures and performance optimization techniques in cloud environments, preferably in the subsurface domain.
Key Responsibilities:
- Automation experience is essential: scripting using PowerShell; ARM templates using JSON (PowerShell also acceptable); Azure DevOps with CI/CD; Site Reliability Engineering.
- Must be able to understand how the applications function.
- Ability to prioritize workload and operate across several initiatives simultaneously.
- Update and maintain the Kappa-Automate database and its connectivity with the PI historian and data lake.
- Participate in troubleshooting, performance tuning, and continuous improvement of the Kappa-Automate platform.
- Design and implement highly configurable deployment pipelines in Azure.
- Configure Delta Lake on Azure Databricks.
- Apply performance tuning techniques such as partitioning, caching, and cluster tuning (see the sketch after this posting).
- Work with various Azure storage types.
- Work with large volumes of structured and unstructured data, ensuring high availability and performance.
- Collaborate with cross-functional teams (data scientists, analysts, business users).
Qualifications
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• 8+ years of experience in data engineering or a related role.
• Proven experience with Azure technologies.
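A hedged illustration of those tuning levers when configuring Delta Lake on Azure Databricks; the table, columns, and paths are assumptions, and OPTIMIZE/ZORDER is Databricks-specific SQL.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").load("/mnt/delta/sensor_events")

# Partitioning: prune whole directories on a low-cardinality column.
(events.write.format("delta").mode("overwrite")
       .partitionBy("event_date")
       .save("/mnt/delta/sensor_events_by_day"))

# Caching: keep a hot subset in memory for several downstream jobs.
recent = events.filter("event_date >= date_sub(current_date(), 7)").cache()
recent.count()  # materializes the cache

# Compaction/clustering: co-locate rows for selective device queries.
spark.sql("OPTIMIZE delta.`/mnt/delta/sensor_events_by_day` ZORDER BY (device_id)")
```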
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Pune
Remote
Role & responsibilities
• At least 5 years of experience in data engineering, with a strong background in Azure Databricks, Scala/Python, and Streamlit.
• Experience in handling unstructured data processing and transformation, with programming knowledge (a parsing sketch follows this posting).
• Hands-on experience in building data pipelines using Scala/Python.
• Big data technologies such as Apache Spark, Structured Streaming, SQL, and Databricks Delta Lake.
• Strong analytical and problem-solving skills, with the ability to troubleshoot Spark applications and resolve data pipeline issues.
• Familiarity with version control systems like Git and CI/CD pipelines using Jenkins.
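A hedged sketch of unstructured-data processing in PySpark: parse raw text logs into a structured DataFrame. The path and log layout are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

lines = spark.read.text("/mnt/raw/app_logs/")  # single string column "value"

pattern = r"^(\S+ \S+) \[(\w+)\] (.*)$"  # e.g. "2025-07-01 10:00:01 [ERROR] msg"
parsed = lines.select(
    F.regexp_extract("value", pattern, 1).alias("ts"),
    F.regexp_extract("value", pattern, 2).alias("level"),
    F.regexp_extract("value", pattern, 3).alias("message"),
).filter(F.col("level") != "")  # drop lines that did not match

parsed.groupBy("level").count().show()
```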
Posted 1 month ago
8.0 - 12.0 years
10 - 30 Lacs
Hyderabad, Pune, Delhi / NCR
Work from Office
Cigniti Technologies / Coforge Job Description
Azure Data Lead:
- Design, develop, and maintain scalable ETL pipelines using Azure services to process, transform, and load large datasets into Microsoft Fabric OneLake or other data stores.
- Collaborate with cross-functional teams, including data architects, analysts, and business stakeholders, to gather data requirements and deliver efficient data solutions.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Work together with data scientists and analysts to understand data needs and create effective data workflows.
- Create and maintain data storage solutions, including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
- Ensure the quality, integrity, and dependability of data by implementing data validation and cleansing procedures.
- Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
- Monitor and resolve data pipeline problems to guarantee data consistency and availability.
- Strong experience in common data warehouse modelling principles, including Kimball and Inmon.
- Knowledge of Azure Databricks, Azure IoT, Azure HDInsight + Spark, and Azure Stream Analytics.
- Working knowledge of Python is desirable.
Mandatory skills: Azure Data Factory, Azure Databricks, SQL, Python, PySpark.
Years of experience: 8.0 to 12 years
Location: Hyderabad, Pune, or Noida
NP: Immediate to 15 days
Interview process: 2 rounds, virtual
Posted 1 month ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering
Service Line: Data & Analytics Unit
Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Additional Responsibilities: Knowledge of more than one technology; basics of architecture and design fundamentals; knowledge of testing tools; knowledge of agile methodologies; understanding of project life cycle activities on development and maintenance projects; understanding of one or more estimation methodologies and knowledge of quality processes; basics of the business domain to understand the business requirements; analytical abilities, strong technical skills, good communication skills; good understanding of the technology and domain; ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods; awareness of the latest technologies and trends; excellent problem solving, analytical, and debugging skills.
Technical and Professional: Primary skills: Technology-Cloud Platform-Azure Development & Solution Architecting
Posted 1 month ago
5.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering, BCA, BSc, MCA, MTech, MSc
Service Line: Data & Analytics Unit
Responsibilities:
1. 5-8 years' experience in Azure (hands-on experience in Azure Databricks and Azure Data Factory)
2. Good knowledge of SQL and PySpark
3. Should have knowledge of the medallion architecture pattern (a bronze/silver/gold sketch follows this posting)
4. Knowledge of Integration Runtime
5. Knowledge of the different ways of scheduling jobs via ADF (event/schedule triggers, etc.)
6. Should have knowledge of AAS and cubes
7. Create, manage, and optimize cube processing
8. Good communication skills
9. Experience in leading a team
Additional Responsibilities: Good knowledge of software configuration management systems; strong business acumen, strategy, and cross-industry thought leadership; awareness of the latest technologies and industry trends; logical thinking and problem-solving skills, along with an ability to collaborate; knowledge of two or three industry domains; understanding of the financial processes for various types of projects and the various pricing models available; client interfacing skills; knowledge of SDLC and agile methodologies; project and team management.
Preferred Skills: Technology-Big Data - Data Processing-Spark
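A minimal sketch of the medallion (bronze/silver/gold) pattern named in item 3, using hypothetical ADLS paths; each layer is persisted as its own Delta table, refined from the one before it.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
base = "abfss://lake@exampleaccount.dfs.core.windows.net"

# Bronze: raw ingest, stored as-is plus an audit column.
bronze = (spark.read.json(f"{base}/landing/sales/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save(f"{base}/bronze/sales")

# Silver: cleaned and conformed.
silver = (spark.read.format("delta").load(f"{base}/bronze/sales")
          .dropDuplicates(["sale_id"])
          .filter(F.col("amount") > 0))
silver.write.format("delta").mode("overwrite").save(f"{base}/silver/sales")

# Gold: business-level aggregate, e.g. feeding AAS cubes.
gold = silver.groupBy("region").agg(F.sum("amount").alias("total_sales"))
gold.write.format("delta").mode("overwrite").save(f"{base}/gold/sales_by_region")
```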
Posted 1 month ago
5.0 - 9.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering, BSc, BCA, MSc, MCA, MTech
Service Line: Data & Analytics Unit
Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Additional Responsibilities: Knowledge of more than one technology; basics of architecture and design fundamentals; knowledge of testing tools; knowledge of agile methodologies; understanding of project life cycle activities on development and maintenance projects; understanding of one or more estimation methodologies and knowledge of quality processes; basics of the business domain to understand the business requirements; analytical abilities, strong technical skills, good communication skills; good understanding of the technology and domain; awareness of the latest technologies and trends; ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods; excellent problem solving, analytical, and debugging skills.
Technical and Professional: Primary skills: Azure Databricks
Preferred Skills: Technology-Cloud Platform-Azure Development & Solution Architecting
Posted 1 month ago
3.0 - 5.0 years
5 - 8 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About the Role:
We're hiring 2 Cloud & Data Engineering Specialists to join our fast-paced, agile team. These roles are focused on designing, developing, and scaling modern, cloud-based data engineering solutions using tools like Azure, AWS, GCP, Databricks, Kafka, PySpark, SQL, Snowflake, and ADF.
Key Responsibilities:
- Develop and manage cloud-native solutions on Azure or AWS
- Build real-time streaming apps with Kafka (a consumer sketch follows this posting)
- Engineer services using Java and Python
- Deploy and manage Kubernetes-based containerized applications
- Process big data using Databricks
- Administer SQL Server and Snowflake databases; write advanced SQL
- Utilize Unix/Linux for system operations
Must-Have Skills:
- Azure or AWS cloud experience
- Kafka, Java, Python, Kubernetes
- Databricks, SQL Server, Snowflake
- Unix/Linux commands
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote.
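A hedged illustration of a real-time Kafka consumer in Python (one of the two languages the posting lists), using the kafka-python package; the topic, brokers, and payload shape are assumptions, not the employer's actual system.

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payments",                                    # hypothetical topic
    bootstrap_servers=["broker1:9092", "broker2:9092"],
    group_id="payments-monitor",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for msg in consumer:  # blocks, yielding records as they arrive
    event = msg.value
    if event.get("amount", 0) > 10_000:
        print(f"High-value payment {event.get('id')} at offset {msg.offset}")
```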
Posted 1 month ago