8.0 - 13.0 years
18 - 22 Lacs
Hyderabad, Bengaluru
Work from Office
To Apply - Mandatory to submit details via Google Form - https://forms.gle/cCa1WfCcidgiSTgh8
Position: Senior Data Engineer - Total 8+ years required; relevant 6+ years in Databricks, AWS, Apache Spark & Informatica (required skills).
As a Senior Data Engineer on our team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking an experienced Data Engineer to design, implement, and maintain robust data pipelines and analytics solutions using Databricks and AWS services. The ideal candidate will have a strong background in data services, big data technologies, and programming languages.
Role & responsibilities:
- Technical Leadership: Guide and mentor teams in designing and implementing Databricks solutions.
- Architecture & Design: Develop scalable data pipelines and architectures using Databricks Lakehouse.
- Data Engineering: Lead the ingestion and transformation of batch and streaming data.
- Performance Optimization: Ensure efficient resource utilization and troubleshoot performance bottlenecks.
- Security & Compliance: Implement best practices for data governance, access control, and compliance.
- Collaboration: Work closely with data engineers, analysts, and business stakeholders.
- Cloud Integration: Manage Databricks environments on Azure, AWS, or GCP.
- Monitoring & Automation: Set up monitoring tools and automate workflows for efficiency.
Qualifications:
- 6+ years of experience in Databricks and AWS, 4+ years in Apache Spark and Informatica.
- Excellent problem-solving and leadership skills.
Good to have these skills:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Preferred candidate profile (good to have):
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration
Technical skills (good to have):
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
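For illustration only, a minimal PySpark sketch of the kind of S3-to-Delta Lake batch ingestion this role describes; the bucket paths, column names, and partitioning scheme are hypothetical and not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical bucket paths and column names, for illustration only.
RAW_PATH = "s3://example-bucket/raw/orders/"
DELTA_PATH = "s3://example-bucket/lake/orders_delta/"

# On Databricks the session and Delta support are provided by the runtime;
# on plain Spark the delta-spark package would need to be installed.
spark = SparkSession.builder.appName("orders-batch-ingestion").getOrCreate()

# Ingest a raw CSV batch from S3.
raw_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(RAW_PATH)
)

# Basic cleansing: drop duplicate orders, normalise the timestamp, stamp the load date.
clean_df = (
    raw_df.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("load_date", F.current_date())
)

# Write to a Delta Lake table, partitioned so downstream queries can prune by load date.
(
    clean_df.write
    .format("delta")
    .mode("append")
    .partitionBy("load_date")
    .save(DELTA_PATH)
)
```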
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Hybrid
We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, building, and optimizing data pipelines and architectures to support our growing data-driven initiatives. Knowledge of machine learning techniques and frameworks is a significant advantage and will allow you to collaborate closely with our data science team.
Key Responsibilities:
- Design, implement, and maintain scalable data pipelines for collecting, processing, and analyzing large datasets.
- Build and optimize data architectures to support business intelligence, analytics, and machine learning models.
- Collaborate with data scientists, analysts, and software engineers to ensure seamless data integration and accessibility.
- Develop and maintain ETL (Extract, Transform, Load) workflows and tools.
- Monitor and troubleshoot data systems to ensure high availability and performance.
- Implement and enforce best practices for data security, governance, and quality.
- Evaluate and integrate new technologies to enhance data engineering capabilities.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Proficiency in programming languages such as Python, Java, or Scala.
- Hands-on experience with data pipeline tools (e.g., Apache Airflow, AWS Glue).
- Strong knowledge of SQL and database systems (e.g., PostgreSQL, MySQL, MongoDB).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Hadoop, Spark).
- Familiarity with data modeling, schema design, and data warehousing concepts.
- Understanding of CI/CD pipelines and version control systems like Git.
Preferred Skills:
- Familiarity with machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Experience deploying machine learning models and working with MLOps tools.
- Knowledge of distributed systems and real-time data processing (e.g., Kafka, Flink).
Posted 1 week ago
6.0 - 8.0 years
1 - 6 Lacs
Kolkata, Pune
Work from Office
Job Title: Developer
Work Location: Pune - MH, Kolkata - WB
Skill Required: Oracle, SQL, Databricks
Experience Range: 6-8 Years
Job Description: Financial Crime experience, SQL, Data Modeling, System Analysis, Databricks engineering and architecture, database administration, project and resource planning and management
Essential Skills: Financial Crime experience, SQL, Data Modeling, System Analysis, Databricks engineering and architecture, database administration, project and resource planning and management
Desirable Skills: PySpark
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Kochi
Remote
Senior Data Engineer (Databricks) - REMOTE
Location: Remote (Portugal)
Type: Contract
Experience: 5+ Years
Language: Fluent English required
We are looking for a Senior Data Engineer to join our remote consulting team. In this role, you'll be responsible for designing, building, and optimizing large-scale data processing systems using Databricks and modern data engineering tools. You'll collaborate closely with data scientists, analysts, and technical teams to deliver scalable and reliable data platforms.
Key Responsibilities:
- Design, develop, and maintain robust data pipelines for processing structured/unstructured data
- Build and manage data lakes and data warehouses optimized for analytics
- Optimize data workflows for performance, scalability, and cost-efficiency
- Collaborate with stakeholders to gather data requirements and translate them into scalable solutions
- Implement data governance, data quality, and security best practices
- Migrate legacy data processes (e.g., from SAS) to modern platforms
- Document architecture, data models, and pipelines
Required Qualifications:
- 5+ years of experience in data engineering or related fields
- 3+ years of hands-on experience with Databricks
- Strong command of SQL and experience with PostgreSQL, MySQL, or NoSQL databases
- Programming experience in Python, Java, or Scala
- Experience with ETL processes, orchestration frameworks, and data pipeline automation
- Familiarity with Spark, Kafka, or similar big data tools
- Experience working on cloud platforms (AWS, Azure, or GCP)
- Prior experience migrating from SAS is a plus
- Excellent communication skills in English
Posted 1 week ago
12.0 - 15.0 years
20 - 32 Lacs
Hyderabad
Hybrid
JOB DESCRIPTION
Position: Lead Software / Platform Engineer II (Magnet)
Job Location: Pune, India
Work Arrangement: Hybrid
Line of Business / Sub-Line of Business: EMEA Technology
Department: Tech Org: GOSC Operations 41165
Assignment Category / Grade: Full-time, GG12
Technical Skills: .Net and up, Java, Microservices, UI/ReactJS, Databricks & Synapse, AS 400
Hiring Manager: Rajdip Pal / Fernando Garcia-Monteavaro
Job Description and Requirements
Role Value Proposition:
The MetLife EMEA Technology organization is evolving to enable MetLife's New Frontier strategy. With a strong vision in place, we are a function focused on driving digital technology strategies for key technology functions within MetLife. In partnership with our business leaders, we develop and deliver seamless technology experiences to our employees across the entire employee lifecycle. Our vision and mission are to create innovative, transformative, and contemporary technology solutions to empower our leaders and employees so they can focus on what matters most, our customers. We are technologists with strong business acumen focused on developing our talent to continually transform and innovate.
As part of the Tech Talent Transformation (T3) agenda, MetLife is establishing a Technology Center in India. This technology center will operate as an integrated organization between onshore, offshore, and strategic vendor partners in an Agile delivery model. We are seeking a leader who can spread the MetLife culture and develop a local team culture, partnering with HR leaders to attract, develop, and retain talent across the organization. This role will also be involved in project delivery, bringing technical skills and execution experience.
As a Magnet, you are the human anchor of your team, fostering a culture of trust, inclusion, and a sense of belonging at MetLife. This role is vital for the well-being of the team. Task management and delivery outcomes of the team are managed within ADM teams aligned on specific capabilities or products. However, when delivery issues arise due to local challenges in the Technology Center, you are expected to step in, facilitate discussions to align with local stakeholders, and support escalation or resolution as needed. You maintain close contact with local leadership or operational teams in the countries, staying aware of local priorities. A strong technical background is also required, as you will be involved in technical work performed in the market covered by the role, integrated with the related ADM teams.
Key Relationships: Internal stakeholders - EMEA ART Leader, ART Leadership team, India EMEA Technology AVP, and Business Process Owners for EMEA Technology.
Key Responsibilities:
- Develop strong technology capabilities to support the EMEA agenda, adopting Agile ways of working across the software delivery lifecycle (Architecture, Design, Development, Testing & Production).
- Partner with internal business process owners, technical team members, and senior management throughout the project life cycle.
- Act as the first point of contact for team members on well-being, interpersonal dynamics, and professional development.
- Foster an inclusive team culture that reflects the company's values.
- Support conflict resolution and encourage open, honest communication.
- Identify and escalate human-related challenges to People & Culture teams when needed.
- In case of delivery issues tied to team dynamics, collaborate with relevant stakeholders to understand the context and contribute to resolution.
- Stay connected with local or operational teams to understand business priorities and team-specific realities.
- Help new joiners integrate into the team from a cultural and human perspective.
- Serve as a bridge between individuals and leadership, including in-country teams, regarding people-related matters.
Education: Bachelor of Computer Science or equivalent.
Technical Stack:
Competencies: Facilitation - Level 3 (Working Experience)
Tech Stack (subject to role):
- Development Frameworks and Languages: .Net and up, Java, Microservices, UI/ReactJS, Databricks & Synapse, AS 400
- Data Management: Database (SQL Server), APIs (APIC, APIM), REST API
- Development & Delivery Methods: Agile (Scaled Agile Framework)
- DevOps and CI/CD: Containers (Azure Kubernetes), CI/CD (Azure DevOps, Git, SonarQube), Scheduling Tools (Azure Scheduler)
- Development Tools & Platforms: IDE (GitHub Copilot, VS Code), Cloud Platform (Microsoft Azure)
- Security and Monitoring: Secure Coding (Veracode), Authentication/Authorization (CA SiteMinder, MS Entra, PingOne), Logging & Monitoring (Azure AppInsights, Elastic)
- Test Automation: Writing and executing automated tests using Java, JavaScript, Selenium, and a test automation framework
Other Critical Requirements:
- Proficiency in multiple programming languages and frameworks.
- Strong problem-solving skills.
- Experience with agile methodologies and continuous integration/continuous deployment (CI/CD).
- Ability to work in a team and communicate effectively.
- Exposure to conflict resolution, mediation, or active listening practices.
- Understanding of psychological safety and team health dynamics.
- Familiarity with diversity, equity, and inclusion principles.
- Experience working across functions or cultures is a plus.
- Prior experience in mentoring, team facilitation, coaching, or peer support roles is a plus.
Soft Skills: Excellent problem-solving, communication, and stakeholder management skills. Ability to balance technical innovation with business value delivery. Business acumen: a level-headed, clear communicator able to gain a detailed understanding of organizational business requirements and business dynamics. Self-motivated and able to work independently. Attention to detail. Collaborative team player. Decisive, supportive, passionate, professional, and accountable.
Posted 1 week ago
6.0 - 11.0 years
14 - 19 Lacs
Bengaluru
Remote
Role: Azure Specialist - CDM Smith
Location: Bangalore
Mode: Remote
Education and Work Experience Requirements:
Key Responsibilities:
- Databricks Platform: Act as a subject matter expert for the Databricks platform within the Digital Capital team; provide technical guidance, best practices, and innovative solutions.
- Databricks Workflows and Orchestration: Design and implement complex data pipelines using Azure Data Factory or Qlik Replicate.
- End-to-End Data Pipeline Development: Design, develop, and implement highly scalable and efficient ETL/ELT processes using Databricks notebooks (Python/Spark or SQL) and other Databricks-native tools.
- Delta Lake Expertise: Utilize Delta Lake for building reliable data lake architecture, implementing ACID transactions, schema enforcement, and time travel, and optimizing data storage for performance.
- Spark Optimization: Optimize Spark jobs and queries for performance and cost efficiency within the Databricks environment. Demonstrate a deep understanding of Spark architecture, partitioning, caching, and shuffle operations.
- Data Governance and Security: Implement and enforce data governance policies, access controls, and security measures within the Databricks environment using Unity Catalog and other Databricks security features.
- Collaborative Development: Work closely with data scientists, data analysts, and business stakeholders to understand data requirements and translate them into Databricks-based data solutions.
- Monitoring and Troubleshooting: Establish and maintain monitoring, alerting, and logging for Databricks jobs and clusters, proactively identifying and resolving data pipeline issues.
- Code Quality and Best Practices: Champion best practices for Databricks development, including version control (Git), code reviews, testing frameworks, and documentation.
- Performance Tuning: Continuously identify and implement performance improvements for existing Databricks data pipelines and data models.
- Cloud Integration: Experience integrating Databricks with other cloud services (e.g., Azure Data Lake Storage Gen2, Azure Synapse Analytics, Azure Key Vault) for a seamless data ecosystem.
- Traditional Data Warehousing & SQL: Design, develop, and maintain schemas and ETL processes for traditional enterprise data warehouses. Demonstrate expert-level proficiency in SQL for complex data manipulation, querying, and optimization within relational database systems.
Mandatory Skills:
- Experience in Databricks and Databricks Workflows and Orchestration
- Python: Hands-on experience in automation and scripting
- Azure: Strong knowledge of Data Lakes, Data Warehouses, and cloud architecture
- Solution Architecture: Experience in designing web applications and data engineering solutions
- DevOps Basics: Familiarity with Jenkins and CI/CD pipelines
- Communication: Excellent verbal and written communication skills
- Fast Learner: Ability to quickly grasp new technologies and adapt to changing requirements
- Cloud Integration: Experience integrating Databricks with other cloud services (e.g., Azure Data Lake Storage Gen2, Azure Synapse Analytics, Azure Key Vault) for a seamless data ecosystem
- Extensive experience with Spark (PySpark, Spark SQL) for large-scale data processing
Additional Information:
- Qualifications: BE, MS, M.Tech, or MCA
- Certifications: Databricks Certified Associate
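As a rough illustration of the Spark optimization themes this posting mentions (caching, broadcast joins, partitioning, and limiting shuffles), a small PySpark sketch; the table names and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spark-tuning-sketch").getOrCreate()

# Hypothetical Delta tables registered in the metastore.
events = spark.read.table("bronze.events")            # large fact-style table
customers = spark.read.table("silver.dim_customer")   # small dimension table

# Cache a filtered DataFrame that several downstream aggregations reuse,
# so it is materialised once instead of being recomputed per action.
recent = events.filter(F.col("event_date") >= "2024-01-01").cache()

# Broadcast the small dimension to avoid shuffling the large side of the join.
enriched = recent.join(F.broadcast(customers), on="customer_id", how="left")

# Repartition by the partition column before writing to limit small files,
# and partition the output so later queries can prune on event_date.
(
    enriched.repartition("event_date")
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("gold.events_enriched")
)
```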
Posted 1 week ago
5.0 - 8.0 years
12 - 22 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Hybrid
Must-Have Skills:
- 5 to 8 years of backend development experience
- Strong proficiency in Core Java and Scala
- Hands-on experience with Spring Framework
- Cloud technologies and microservices architecture
- Multithreaded programming and concurrency
- Scripting (e.g., Shell, Perl) and UNIX/Linux
- Working knowledge of relational databases (Sybase, DB2)
- Excellent communication and problem-solving skills
- Databricks (primary)
Good to Have:
- Experience with Apache Spark, Hadoop, or other big data tools
- Familiarity with FIX Protocol, MQ, XML/DTD
- Exposure to CI/CD tools and Test-Driven Development
- Domain understanding of Equity or Fixed Income trade lifecycle
- Strong debugging and performance profiling capabilities
Posted 1 week ago
4.0 - 9.0 years
15 - 25 Lacs
Noida, Hyderabad, Pune
Work from Office
We are looking for a skilled and passionate AWS Data Engineer to join our dynamic data engineering team. The ideal candidate will have strong experience in building scalable data pipelines and solutions using AWS, PySpark, Databricks, and Snowflake.
Key Responsibilities:
- Design, develop, and maintain large-scale data pipelines on AWS using PySpark and Databricks.
- Work with Snowflake to perform data warehousing tasks including data loading, transformation, and optimization.
- Build efficient and scalable ETL/ELT workflows to support analytics and reporting.
- Collaborate with data analysts, architects, and stakeholders to understand business requirements and translate them into data solutions.
- Implement data quality checks, monitoring, and performance tuning of ETL processes.
- Ensure data governance, security, and compliance in all solutions developed.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience as a Data Engineer with strong exposure to AWS cloud services (S3, Lambda, Glue, Redshift, etc.).
- Hands-on experience with PySpark and Databricks for big data processing.
- Proficiency in working with Snowflake, including data modeling, SnowSQL, and performance optimization.
- Strong SQL and data wrangling skills.
- Familiarity with CI/CD tools and practices for data pipelines.
- Excellent problem-solving, communication, and collaboration skills.
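By way of example, a hedged sketch of pushing a curated Databricks DataFrame into Snowflake, assuming the Spark-Snowflake connector is installed on the cluster; the connection options, credentials, and table names below are placeholders, not details from this posting.

```python
# `spark` is assumed to be the session provided by the Databricks runtime, and the
# Spark-Snowflake connector is assumed to be available on the cluster.
# All connection values, credentials, and table names below are placeholders.
sf_options = {
    "sfUrl": "example_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "********",          # in practice, read from a secret scope
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "TRANSFORM_WH",
}

# A curated Delta table produced earlier in the pipeline (hypothetical name).
curated_df = spark.read.table("gold.daily_sales")

# Push the curated data into a Snowflake table for downstream warehousing.
(
    curated_df.write
    .format("snowflake")
    .options(**sf_options)
    .option("dbtable", "DAILY_SALES")
    .mode("overwrite")
    .save()
)
```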
Posted 1 week ago
12.0 - 20.0 years
20 - 35 Lacs
Hyderabad, Bengaluru
Hybrid
Role & responsibilities
The candidate must be at Databricks Champion level in terms of expertise, with proven hands-on experience in a lead engineer capacity, having designed and implemented enterprise-level solutions in Databricks that leverage the core capabilities: Data Engineering (Delta Lake, PySpark framework, Python, PySpark SQL, Notebooks, Jobs, Workflows), Governance (Unity Catalog), and Data Sharing (Delta Sharing, possibly Clean Rooms and Marketplace). This experience must be recent, built up over at least the past 3 years. A good command of English and the ability to communicate clearly and to the point are a must.
Posted 1 week ago
8.0 - 13.0 years
30 - 40 Lacs
Bhopal, Pune, Chennai
Hybrid
We're Hiring: Senior Data Engineer | Hybrid | All Xebia Locations
Apply by sending your resume to vijay.s@xebia.com
Locations: Chennai, Hyderabad, Bangalore, Pune, Bhopal, Jaipur, Gurugram
Mode: Hybrid - 3 days/week from office
Experience: 8+ Years
Joining: Immediate or within 2 weeks
About the Role:
Xebia is seeking a Senior Data Engineer to join our Data Warehouse team. You'll be building cloud-native, data-intensive applications using modern tech stacks including Databricks, Airflow, Spark, and Terraform. Ideal for professionals with a strong foundation in Python, AWS, and modern data platforms.
Key Responsibilities:
- Build and optimize robust Data Warehouse solutions
- Develop cloud-native, data-intensive applications (AWS experience required)
- Architect and implement Airflow-based workflow management systems
- Design, develop, and maintain Spark applications
- Work with modern data formats - Parquet, Delta Lake, OTFs
- Use IaC tools like Terraform/CDK/CloudFormation for infrastructure automation
- Establish observability via Datadog, Prometheus, or Grafana
- Drive CI/CD with GitHub Actions, Jenkins, or ArgoCD
- Collaborate across teams with business-aligned documentation and code
Required Skills:
- 5+ years in Python, JVM, and Shell scripting (production experience)
- 3+ years in cloud-native data applications (must: AWS; good to have: GCP)
- Strong hands-on experience in Databricks, dbt, and Spark
- IaC tool expertise (Terraform preferred)
- Airflow experience is mandatory
- Understanding of containerization (Docker/Kubernetes)
- Experience with unit testing, CI/CD pipelines, and code reviews
- Excellent written & verbal communication skills
- Ability to convert business needs into data solutions
How to Apply:
Send your CV along with the following details to vijay.s@xebia.com:
- Full Name
- Total Experience
- Current CTC
- Expected CTC
- Current Location
- Preferred Xebia Location (Chennai, Hyderabad, Bangalore, Pune, Bhopal, Jaipur, Gurugram)
- Notice Period / Last Working Day (if serving)
- Primary Skills
- LinkedIn Profile
Only apply if you're available to join immediately or within 2 weeks.
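To illustrate the Airflow-based workflow management this role calls for, a minimal DAG sketch; the DAG id, schedule, and task bodies are hypothetical placeholders rather than anything specified in the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull the day's increment from the source systems.
    print("extracting source data")


def transform():
    # Placeholder: trigger the Spark/Databricks transformation job.
    print("transforming data")


def load():
    # Placeholder: publish curated tables to the warehouse.
    print("loading curated data")


with DAG(
    dag_id="daily_warehouse_refresh",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Simple linear dependency: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```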
Posted 1 week ago
6.0 - 9.0 years
12 - 20 Lacs
Bhubaneswar, Hyderabad
Work from Office
Design scalable data systems, develop analytics-ready models, build ETL pipelines, manage SQL/NoSQL DBs, integrate diverse data sources, orchestrate workflows (Airflow/Glue), and collaborate with teams. Skilled in Databricks, BigQuery, SQL, Python.
Posted 2 weeks ago
7.0 - 12.0 years
18 - 33 Lacs
Chennai, Bengaluru, PAN India
Work from Office
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETRM systems.
- Work on data integration projects within the Energy Trading and Risk Management (ETRM) domain.
- Collaborate with cross-functional teams to integrate data from ETRM trading systems like Allegro, RightAngle, and Endur.
- Optimize and manage data storage solutions in Data Lake and Snowflake.
- Develop and maintain ETL processes using Azure Data Factory and Databricks.
- Write efficient and maintainable code in Python for data processing and analysis.
- Ensure data quality and integrity across various data sources and platforms.
- Ensure data accuracy, integrity, and availability across various trading systems.
- Collaborate with traders, analysts, and IT teams to understand data requirements and deliver robust solutions.
- Optimize and enhance data architecture for performance and scalability.
Mandatory Skills:
- Python/PySpark
- FastAPI
- Azure Data Factory (ADF)
- Databricks
- Data Lake
- Snowflake or SQL
Share your CV to yeshwanthrajan.j@esolglobal.com
Contact: +91 75987 36040
Join our WhatsApp group for job updates: https://chat.whatsapp.com/FTeFAsR7K8kLpX2eoROAEN
Posted 2 weeks ago
8.0 - 13.0 years
25 - 35 Lacs
Gurugram
Remote
Job description
Data Engineer III/IV - IN
Work Location - Remote
Job Description Summary
The Data Engineer is responsible for managing and operating Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and PowerBI/Tableau. The engineer will work closely with the customer and team to manage and operate the cloud data platform.
Job Description
- Provides Level 3/4 operational coverage: Troubleshooting incidents/problems, including collecting logs, cross-checking against known issues, and investigating common root causes (for example failed batches, or infra-related items such as connectivity to source and network issues).
- Knowledge Management: Create/update runbooks as needed / Entitlements.
- Governance: Watch all configuration changes to batches and infrastructure (cloud platform), mapping them to proper documentation and aligning resources.
- Communication: Lead and act as a POC for the customer from off-site, handling communication, escalation, isolating issues, and coordinating with off-site resources while level-setting expectations across stakeholders.
- Change Management: Align resources for on-demand changes and coordinate with stakeholders as required.
- Request Management: Handle user requests; if a request is not runbook-based, create a new KB or update the runbook accordingly.
- Incident Management and Problem Management: Root cause analysis, developing preventive measures and recommendations such as enhanced monitoring or systemic changes as needed.
KNOWLEDGE/SKILLS/ABILITY
- Good hands-on Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, PowerBI/Tableau.
- Ability to read and write SQL and stored procedures.
- Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills.
- Excellent written and verbal communication skills; ability to communicate technical information and ideas so others will understand.
- Ability to successfully work and promote inclusiveness in small groups.
JOB COMPLEXITY: This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research of various sources such as Databricks/AWS/Tableau documentation may be required to identify and resolve issues. Must have the ability to prioritize issues and multi-task.
EXPERIENCE/EDUCATION: Requires a Bachelor's degree in computer science or another related field plus 8+ years of hands-on experience in configuring and managing AWS/Tableau and Databricks solutions. Experience with Databricks and Tableau environments is desired.
"Remote postings are limited to candidates residing within the country specified in the posting location"
About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world's leading technologies across applications, data, and security to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work, year after year according to Fortune, Forbes, and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers, and deliver the future.
More on Rackspace Technology
Though we're all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic.
Posted 2 weeks ago
6.0 - 11.0 years
0 - 0 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Databricks Engineer - Pan India
Role & responsibilities
- Develop and maintain a metadata-driven generic ETL framework for automating ETL code.
- Design, build, and optimize ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS.
- Insure MO rating engine experience required.
- Ingest data from a variety of structured and unstructured sources (APIs, RDBMS, flat files, streaming).
- Develop and maintain robust data pipelines for batch and streaming data using Delta Lake and Spark Structured Streaming.
- Implement data quality checks, validations, and logging mechanisms.
- Optimize pipeline performance, cost, and reliability.
- Collaborate with data analysts, BI, and business teams to deliver fit-for-purpose datasets.
- Support data modelling efforts (star and snowflake schemas, de-norm tables approach) and assist with data warehousing initiatives.
- Work with orchestration tools (Databricks Workflows) to schedule and monitor pipelines.
- Follow best practices for version control, CI/CD, and collaborative development.
Skills
- Hands-on experience in ETL/Data Engineering roles.
- Strong expertise in Databricks (PySpark, SQL, Delta Lake); Databricks Data Engineer Certification preferred.
- Experience with Spark optimization, partitioning, caching, and handling large-scale datasets.
- Proficiency in SQL and scripting in Python or Scala.
- Solid understanding of data lakehouse/medallion architectures and modern data platforms.
- Experience working with cloud storage systems like AWS S3.
- Familiarity with DevOps practices: Git, CI/CD, Terraform, etc.
- Strong debugging, troubleshooting, and performance-tuning skills.
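A minimal sketch of the streaming ingestion pattern this posting references (Spark Structured Streaming into Delta Lake), assuming Databricks Auto Loader (cloudFiles) is available; the paths, table names, and trigger settings are illustrative only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("streaming-ingest-sketch").getOrCreate()

# Hypothetical landing zone, target table, and checkpoint locations.
SOURCE_PATH = "s3://example-bucket/landing/events/"
TARGET_TABLE = "bronze.events"
CHECKPOINT = "s3://example-bucket/_checkpoints/bronze_events/"

# Incrementally pick up newly arrived JSON files via Databricks Auto Loader
# (the cloudFiles source); a schema location is required for schema inference.
stream_df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", CHECKPOINT + "schema/")
    .load(SOURCE_PATH)
    .withColumn("ingest_ts", F.current_timestamp())
)

# Append the stream into a Delta table with checkpointing for exactly-once writes;
# availableNow runs it as an incremental batch suitable for a scheduled Workflow job.
(
    stream_df.writeStream
    .format("delta")
    .option("checkpointLocation", CHECKPOINT)
    .trigger(availableNow=True)
    .toTable(TARGET_TABLE)
)
```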
Posted 3 weeks ago
4.0 - 6.0 years
5 - 15 Lacs
Bengaluru
Work from Office
About Apexon: Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents. We enable #HumanFirstDigital
Job Title: Databricks ETL Developer
Experience: 4-6 Years
Location: Hybrid, preferably in Bangalore
Job Description: We are seeking a skilled Databricks ETL Developer with 4 to 6 years of experience in building and maintaining scalable data pipelines and transformation workflows on the Azure Databricks platform.
Key Responsibilities:
- Design, develop, and optimize ETL pipelines using Azure Databricks (Spark).
- Ingest data from various structured and unstructured sources (Azure Data Lake, SQL DBs, APIs).
- Implement data transformation and cleansing logic in PySpark or Scala.
- Collaborate with data architects, analysts, and business stakeholders to understand data requirements.
- Ensure data quality, performance tuning, and error handling in data workflows.
- Schedule and monitor ETL jobs using Azure Data Factory or Databricks Workflows.
- Participate in code reviews and maintain coding best practices.
Required Skills:
- Hands-on experience with Azure Databricks and Spark (PySpark/Scala).
- Strong ETL development experience handling large-scale data.
- Proficient in SQL and working with relational databases.
- Familiarity with Azure Data Lake, Data Factory, Delta Lake.
- Experience with version control tools like Git.
- Good understanding of data warehousing concepts and data modeling.
Preferred:
- Experience in CI/CD for data pipelines.
- Exposure to BI tools like Power BI for data validation.
Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.
You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)
Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group Health Insurance covering a family of 4
- Term Insurance and Accident Insurance
- Paid Holidays & Earned Leaves
- Paid Parental Leave
- Learning & Career Development
- Employee Wellness
Posted 3 weeks ago
6.0 - 11.0 years
10 - 18 Lacs
Bengaluru
Remote
We are looking for experienced DBAs who have worked on multiple database technologies and cloud migration projects.
- 6+ years of experience working on SQL/NoSQL/Data Warehouse platforms, on-premise and in the cloud (AWS, Azure & GCP)
- Provide expert-level guidance on cloud adoption, data migration strategies, and digital transformation projects
- Strong understanding of RDBMS, NoSQL, Data Warehouse, In-Memory, and Data Lake architectures, features, and functionalities
- Proficiency in SQL and data manipulation techniques
- Experience with data loading and unloading tools and techniques
- Expertise in data access management and database reliability & scalability; administer, configure, and optimize database resources and services across the organization
- Ensure high availability, replication, and failover strategies
- Implement serverless database architectures for cost-effective, scalable storage
Key Responsibilities
- Strong proficiency in database administration of one or more databases (Snowflake, BigQuery, Amazon Redshift, Teradata, SAP HANA, Oracle, PostgreSQL, MySQL, SQL Server, Cassandra, MongoDB, Neo4j, Cloudera, Micro Focus, IBM DB2, Elasticsearch, DynamoDB, Azure Synapse)
- Plan and execute the migration of on-prem Database/Analysis Services/Reporting Services/Integration Services to AWS/Azure/GCP
- Develop automation scripts using Python, Shell Scripting, or Terraform for streamlined database operations
- Provide technical guidance and mentoring to junior DBAs and data engineers
- Hands-on experience with data modelling, ETL/ELT processes, and data integration tools
- Monitor and optimize the performance of virtual warehouses, queries, and overall system performance
- Optimize database performance through query tuning, indexing, and configuration
- Manage replication, backups, and disaster recovery for high availability
- Troubleshoot and resolve database issues, including performance bottlenecks, errors, and downtime
- Collaborate with the infrastructure team to configure, manage, and monitor PostgreSQL in cloud environments (AWS, GCP, or Azure)
- Provide on-call support for critical database operations and incidents
- Provide Level 3 and 4 technical support, troubleshooting complex issues
- Participate in cross-functional teams for database design and optimization
Posted 3 weeks ago
8.0 - 13.0 years
8 - 14 Lacs
Ahmedabad
Remote
Identify leads via networking/research, build client relationships, understand needs, pitch solutions, partner with tech firms (Oracle, Microsoft, Databricks), handle negotiations, support, and track sales while staying updated on trends.
Perks and benefits: Salary + Sales Incentives
Posted 3 weeks ago
5.0 - 7.0 years
3 - 8 Lacs
Pune
Work from Office
Role & responsibilities
Clickflow is seeking a meticulous and technically proficient Data Tester to join our growing data analytics team. The ideal candidate will be responsible for ensuring the quality, accuracy, and integrity of our data solutions, with a strong focus on ETL processes, data warehousing, and Power BI reports built on the Microsoft Azure platform.
In this role, you will be a critical player in our data governance and quality assurance framework. You will collaborate closely with data engineers, data analysts, and business stakeholders to validate data pipelines and ensure that our data-driven insights are reliable and accurate. If you have a passion for data, a keen eye for detail, and a strong background in Azure technologies, we encourage you to apply.
Key responsibilities:
- ETL and Data Pipeline Testing: Design, develop, and execute comprehensive test plans, test cases, and test scripts for ETL processes. Validate data extraction, transformation, and loading logic to ensure data accuracy, completeness, and consistency from source to target systems.
- Data Solution Validation: Perform end-to-end testing of data warehousing solutions and data models. Verify data integrity and quality across various data stores within the Azure ecosystem.
- Power BI Report Testing: Meticulously test Power BI reports and dashboards for data accuracy, functionality, performance, and visual representation. Validate calculations, filters, and slicers to ensure they meet business requirements.
- Azure Technology Expertise: Leverage your knowledge of the Azure data stack, including Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure SQL Database, to perform thorough testing.
- SQL Proficiency: Write and execute complex SQL queries for data validation, reconciliation, and identification of data anomalies.
- Defect Management: Identify, document, and track defects in a clear and concise manner using defect tracking tools. Work closely with the development team to facilitate the resolution of identified issues.
- Test Automation: Contribute to the development and maintenance of automated testing frameworks to improve the efficiency and coverage of data testing.
- Collaboration and Communication: Actively participate in agile ceremonies and effectively communicate testing progress, results, and risks to project stakeholders.
Preferred candidate profile:
- Proven Experience: 5 years of experience in a Data Tester, ETL Tester, or similar quality assurance role.
- ETL Testing Expertise: Strong understanding of ETL concepts and hands-on experience in testing ETL pipelines and data transformations.
- Microsoft Azure Proficiency: Demonstrable experience with Azure data services such as Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, Azure SQL Database/Managed Instance, and Azure Data Lake Storage (ADLS).
- Power BI Knowledge: Experience in testing Power BI reports and dashboards, with a good understanding of DAX.
- Advanced SQL Skills: The ability to write complex SQL queries to query and validate large datasets is essential.
- Analytical and Problem-Solving Skills: A sharp analytical mind with the ability to troubleshoot complex data issues.
- Attention to Detail: A meticulous approach to testing to ensure the highest level of data quality.
- Communication Skills: Excellent verbal and written communication skills, with the ability to articulate technical issues to both technical and non-technical audiences.
- Experience with test management tools (e.g., Azure DevOps, Jira).
- Knowledge of data warehousing concepts (e.g., dimensional modeling).
- Familiarity with scripting languages such as Python for test automation.
- Relevant Microsoft Azure certifications (e.g., Azure Data Engineer Associate, Azure Fundamentals).
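For illustration, a small PySpark sketch of the kind of reconciliation and data-quality checks this testing role describes; the table and column names are hypothetical, and real checks would typically be driven by the project's test framework.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-validation-sketch").getOrCreate()

# Hypothetical source extract and warehouse target tables.
source = spark.read.table("staging.orders_extract")
target = spark.read.table("dw.fact_orders")

# 1. Row-count reconciliation between source and target.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# 2. Column-level checks: null mandatory keys and duplicate business keys in the target.
null_keys = target.filter(F.col("order_id").isNull()).count()
dup_keys = target.groupBy("order_id").count().filter(F.col("count") > 1).count()
assert null_keys == 0, f"{null_keys} rows with null order_id"
assert dup_keys == 0, f"{dup_keys} duplicated order_id values"

# 3. Measure reconciliation: totals should agree between layers (small float tolerance).
src_total = source.agg(F.sum("amount")).first()[0]
tgt_total = target.agg(F.sum("amount")).first()[0]
assert abs(src_total - tgt_total) < 0.01, f"Amount mismatch: {src_total} vs {tgt_total}"
```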
Posted 3 weeks ago
0.0 - 5.0 years
2 - 6 Lacs
Ameerpet
Work from Office
Key Responsibilities:
- Data Exploration & Querying: Use Databricks SQL for querying and analysing data. Document data lineage and ensure reproducibility.
- Exploratory Data Analysis (EDA): Use Databricks Notebooks (Python, R) to explore structured and unstructured data. Perform hypothesis testing and statistical profiling.
- Feature Engineering: Create and manage features using Databricks Feature Store. Transform raw data into meaningful inputs for models.
- Model Development & Training (for fraud claims identification and loss prediction modelling): Build and train models using MLlib, scikit-learn, TensorFlow, or PyTorch. Utilize Databricks Runtime for ML for optimized performance.
- Experiment Tracking & Model Evaluation: Track experiments and model metrics using MLflow. Evaluate model performance using cross-validation and business KPIs.
- Productionization & Monitoring: Assist in transitioning models from development to production. Monitor model drift and performance over time.
- Collaboration & Communication: Work closely with Data Engineers and ML Engineers to ensure data readiness and model deployment. Present findings and insights to stakeholders using visualizations and dashboards.
Required Skills:
- Strong programming in Python or R
- Proficiency in machine learning algorithms and statistical methods
- Experience with big data processing using Spark
- Familiarity with cloud platforms (AWS, Azure, GCP)
- Excellent communication and storytelling skills
Tools & Technologies Used:
- Databricks Notebooks
- Apache Spark (via PySpark or SparkR)
- MLflow (Tracking, Registry, Deployment)
- Feature Store
- Delta Lake
- Databricks SQL
- Visualization tools (e.g., matplotlib, seaborn, Power BI)
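As an illustration of the MLflow experiment-tracking responsibility above, a minimal scikit-learn + MLflow sketch using synthetic data in place of the real claims features; the model choice, parameters, and metric are assumptions for the example, not requirements from the posting.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for prepared claims features; real inputs would come from
# Delta tables / the Databricks Feature Store.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

with mlflow.start_run(run_name="fraud_rf_baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Log parameters, the evaluation metric, and the fitted model so runs can be
    # compared and promoted via the model registry later.
    mlflow.log_params(params)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, artifact_path="model")
```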
Posted 4 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Mumbai, Hyderabad, Chennai
Hybrid
Primarily looking for a Data Engineer with expertise in processing data pipelines using Databricks, PySpark, and SQL on cloud distributions like AWS.
Must have: AWS, Databricks
Good to have: PySpark, Snowflake, Talend
Requirements - the candidate must be experienced working in projects involving the areas below; other ideal qualifications include experience in:
- Processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
- Very proficient in large-scale data operations using Databricks and overall very comfortable using Python
- Familiarity with AWS compute, storage, and IAM concepts
- Experience in working with S3 Data Lake as the storage tier
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required
- Cloud warehouse experience (Snowflake, etc.) is a huge plus
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit
Skills
- Hands-on experience with Databricks and Spark SQL
- AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
- Experience in shell scripting
- Exceptionally strong analytical and problem-solving skills
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
- Strong experience with relational databases and data access methods, especially SQL
- Excellent collaboration and cross-functional leadership skills
- Excellent communication skills, both written and verbal
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment
- Ability to leverage data assets to respond to complex questions that require timely answers
- Working knowledge of migrating relational and dimensional databases on the AWS Cloud platform
Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.
Posted 1 month ago
6.0 - 11.0 years
6 - 16 Lacs
Bengaluru
Work from Office
Responsibilities:
- Collaborate with cross-functional teams on MuleSoft integrations
- Ensure data accuracy through testing and validation processes
- Design, develop & maintain ETL solutions using Talend
Posted 1 month ago
7.0 - 9.0 years
14 - 20 Lacs
Pune
Work from Office
Strong proficiency in SQL (Structured Query Language): querying, manipulating, and optimizing data. Experience in ETL development (Informatica), data warehousing, ADF, GCP, and Databricks. Extensive experience with popular ETL tools.
Required Candidate Profile:
- 2+ years of experience in Informatica and complex SQL queries
- SQL: Oracle, MS SQL, Teradata, Netezza
- ETL: Informatica PowerCenter (must)
- Cloud: ADF or Databricks or GCP or Google Dataproc
Posted 1 month ago
6.0 - 11.0 years
12 - 22 Lacs
Bengaluru
Remote
Job Summary We are looking for a highly skilled Cloud Engineer with a strong background in real-time and batch data ingestion and data processing, azure products-devops, azure cloud. The ideal candidate should have a deep understanding of streaming architectures and performance optimization techniques in cloud environments, preferably in subsurface domain. Key Responsibilities Automation experience essential : Scripting, using PowerShell. ARM Templates, using JSON (PowerShell also acceptable) Azure DevOps with CI/CD, Site Reliability Engineering Must be able to understand the concept of how the applications function. The ability to priorities workload and operate across several initiatives simultaneously Update and maintain the Kappa-Automate database and connectivity with the pi historian and data lake Participate in troubleshooting, performance tuning, and continuous improvement of the Kappa Automate platform Designing and implementing highly configurable Deployment pipelines in Azure Configuring Delta Lake on Azure Databricks Apply performance tuning techniques such as partitioning, caching, and cluster Working on various Azure storage types Work with large volumes of structured and unstructured data, ensuring high availability and performance. Collaborate with cross-functional teams (data scientists, analysts, business users) Qualifications • Bachelors or Masters degree in Computer Science, Information Technology, or a related field. • 8+ years of experience in data engineering or a related role. • Proven experience with Azure technologies.
Posted 1 month ago
3.0 - 8.0 years
6 - 14 Lacs
Ahmedabad
Work from Office
Role & responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
- Provide solutions that are forward-thinking in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill reporting requirements
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with client architects and team members
- Orchestrate the data pipelines in a scheduler via Airflow
Preferred candidate profile
- Bachelor's and/or master's degree in computer science or equivalent experience
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of Data Management principles
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
- Hands-on experience in SQL, Python, and Spark (PySpark)
- Must have experience in the AWS/Azure stack
- Desirable to have ETL with batch and streaming (Kinesis)
- Experience in building ETL / data warehouse transformation processes
- Experience with Apache Kafka for use with streaming data / event-based data
- Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging & geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with high attention to detail
Posted 1 month ago