
3345 Databricks Jobs - Page 49

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 years

6 - 7 Lacs

Bengaluru

On-site


About Tarento: Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions. We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose. Role Overview: An Azure Data Engineer specializing in Databricks is responsible for designing, building, and maintaining scalable data solutions on the Azure cloud platform, with a focus on leveraging Databricks and related big data technologies. The role involves close collaboration with data scientists, analysts, and software engineers to ensure efficient data processing, integration, and delivery for analytics and business intelligence needs. Key Responsibilities: Design, develop, and maintain robust and scalable data pipelines using Azure Databricks, Azure Data Factory, and other Azure services. Build and optimize data architectures to support large-scale data processing and analytics. Collaborate with cross-functional teams to gather requirements and deliver data solutions tailored to business needs. Ensure data quality, integrity, and security across various data sources and pipelines. Implement data governance, compliance, and best practices for data security (e.g., encryption, RBAC). Monitor, troubleshoot, and optimize data pipeline performance, ensuring reliability and scalability.
Document technical specifications, data pipeline processes, and architectural decisions. Support and troubleshoot data workflows, ensuring consistent data delivery and availability for analytics and reporting. Automate data tasks and deploy production-ready code using CI/CD practices. Stay updated with the latest Azure and Databricks features, recommending improvements and adopting new tools as appropriate. Required Skills and Qualifications: Bachelor’s degree in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering, with hands-on expertise in Azure and Databricks environments. Proficiency in Databricks, Apache Spark, and Spark SQL. Strong programming skills in Python and/or Scala. Advanced SQL skills and experience with relational and NoSQL databases. Experience with ETL processes, data warehousing concepts, and big data technologies (e.g., Hadoop, Kafka). Familiarity with Azure services: Azure Data Lake Storage (ADLS), Azure Data Factory, Azure SQL Data Warehouse, Cosmos DB, Azure Stream Analytics, Azure Functions. Understanding of data modeling, schema design, and data integration best practices. Strong analytical, problem-solving, and troubleshooting abilities. Experience with source code control systems (e.g., Git) and technical documentation tools. Excellent communication and collaboration skills; ability to work both independently and as part of a team. Preferred Skills: Experience with automation, unit testing, and CI/CD pipelines. Certifications in Azure Data Engineering or Databricks are advantageous. Soft Skills: Flexible, self-starter, and proactive in learning and adopting new technologies. Ability to manage multiple priorities and work to tight deadlines. Strong stakeholder management and teamwork capabilities.
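For context on the kind of pipeline work this posting describes, here is a minimal, hedged PySpark sketch of an Azure Databricks job that reads raw files from ADLS and writes a cleaned Delta table. The storage paths, container names, and columns are illustrative assumptions, not part of the listing.

    from pyspark.sql import SparkSession, functions as F

    # On Databricks the `spark` session already exists; building one here keeps the sketch self-contained.
    spark = SparkSession.builder.appName("orders_bronze_to_silver").getOrCreate()

    # Hypothetical ADLS path; in practice it would come from Azure Data Factory parameters or a config table.
    raw = (spark.read
           .option("header", "true")
           .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

    cleaned = (raw
               .dropDuplicates(["order_id"])
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .filter(F.col("order_id").isNotNull()))

    # Delta format (preinstalled on Databricks) gives ACID writes and time travel on the lake.
    (cleaned.write
     .format("delta")
     .mode("overwrite")
     .save("abfss://curated@examplelake.dfs.core.windows.net/orders_silver/"))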

Posted 1 week ago

Apply

5.0 years

1 - 9 Lacs

Bengaluru

On-site


Security represents one of the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world. The Cloud App and Identity Research (CAIR) team leads the security research of Microsoft Defender for Cloud Apps. We work at the cutting edge of AI and cloud technology, and researchers on the team are world-class experts in cloud-related threats: talented and enthusiastic employees. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Responsibilities: Build algorithms and innovative methods to discover and defend against sophisticated real-world cloud-based attacks in the SaaS ecosystem. Collaborate with other data scientists to develop machine learning systems for detecting anomalies, compromises, fraud, and non-human identity cyber-attacks using both Gen AI and graph-based systems. Identify and integrate multiple data sources, or types of data, and develop expertise with multiple data sources to tell a story, identify new patterns and business opportunities, and communicate visually and verbally with clear and compelling data-driven stories. Analyze extensive datasets and develop a robust, scalable feature engineering pipeline within a PySpark-based environment. Acquire and use broad knowledge of innovative methods, algorithms, and tools from within Microsoft and from the scientific literature, applying your own analysis of scalability and applicability to the formulated problem. Work across Threat Researchers, engineering, and product teams to enable metrics for product success. Contribute to active engagement with the security ecosystem through research papers, presentations, and blogs. Provide subject matter expertise to customers based on industry attack trends and product capabilities. Qualifications: 5+ years of programming experience in languages such as C/C++/C#/Python, and hands-on experience using technologies such as Spark, Azure ML, SQL, KQL, Databricks, etc. Able to prepare data pipelines and feature engineering pipelines to build robust models using SQL, PySpark, Azure Data Studio, etc. Knowledge of classification, prediction, anomaly detection, optimization, Graph ML, and NLP. Candidates must be comfortable manipulating and analyzing complex, high-dimensional data from various sources to solve difficult problems. Knowledge of working in a cloud-computing environment such as Azure, AWS, or Google Cloud.
Proficient in relational databases (SQL), big data technologies (PySpark), and Azure storage technologies such as ADLS, Cosmos DB, etc. Generative AI experience is a plus. Bachelor's or higher degree in Computer Science, Statistics, Mathematics, Engineering, or related disciplines. Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
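As a rough illustration of the PySpark feature-engineering work this posting mentions, the sketch below builds per-user hourly activity features from a hypothetical SaaS audit-log table and flags unusual bursts of activity. The table and column names are assumptions for illustration only, not part of Microsoft's actual detection logic.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("signin_features").getOrCreate()

    # Hypothetical audit-log table; real sources and schemas would differ.
    events = spark.table("security.cloud_app_events")

    hourly = (events
              .groupBy("user_id", F.window("event_time", "1 hour").alias("w"))
              .agg(F.count("*").alias("event_count"),
                   F.countDistinct("ip_address").alias("distinct_ips")))

    # Compare each hour with the user's own history to surface anomalous spikes.
    per_user = Window.partitionBy("user_id")
    features = (hourly
                .withColumn("mean_count", F.mean("event_count").over(per_user))
                .withColumn("std_count", F.stddev("event_count").over(per_user))
                .withColumn("count_zscore",
                            (F.col("event_count") - F.col("mean_count")) /
                            F.coalesce(F.col("std_count"), F.lit(1.0))))

    features.filter(F.col("count_zscore") > 3).show()

In practice such features would feed a trained anomaly-detection or graph-based model rather than a fixed threshold; the z-score filter here only stands in for that step.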

Posted 1 week ago

Apply

10.0 years

7 - 10 Lacs

Vadodara

On-site


About Rearc: Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers. Our commitment is simple - finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together! As a Lead Data Engineer at Rearc, you'll play a pivotal role in establishing and maintaining technical excellence within our data engineering team. Your deep expertise in data architecture, ETL processes, and data modeling will be instrumental in optimizing data workflows for efficiency, scalability, and reliability. You'll collaborate closely with cross-functional teams to design and implement robust data solutions that meet business objectives and adhere to best practices in data management. Building strong partnerships with both technical teams and stakeholders will be essential as you drive data-driven initiatives and ensure their successful implementation. What You Bring: With 10+ years of experience in data engineering, data architecture, or related fields, you offer a wealth of expertise in managing and optimizing data pipelines and architectures. Extensive experience in writing and testing Java and/or Python. Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, dbt, or AWS Glue. Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask; proficiency with Spark and Databricks is highly desirable. You have a proven track record of leading complex data engineering projects, including designing and implementing scalable data solutions. Your hands-on experience with ETL processes, data warehousing, and data modeling tools allows you to deliver efficient and robust data pipelines. You possess in-depth knowledge of data integration tools and best practices. You have a strong understanding of cloud-based data services and technologies (e.g., Amazon Redshift, Azure Synapse Analytics, Google BigQuery). You bring strong strategic and analytical skills to the role, enabling you to solve intricate data challenges and drive data-driven decision-making. Proven proficiency in implementing and optimizing data pipelines using modern tools and frameworks, including Databricks for data processing and Delta Lake for managing large-scale data lakes. Your exceptional communication and interpersonal skills facilitate collaboration with cross-functional teams and effective stakeholder engagement at all levels. What You’ll Do: As a Lead Data Engineer at Rearc, your role is pivotal in driving the success of our data engineering initiatives. You will lead by example, fostering trust and accountability within your team while leveraging your technical expertise to optimize data processes and deliver exceptional data solutions. Here's what you'll be doing: Understand Requirements and Challenges: Collaborate with stakeholders to deeply understand their data requirements and challenges, enabling the development of robust data solutions tailored to the needs of our clients.
Implement with a DataOps Mindset: Embrace a DataOps mindset and utilize modern data engineering tools and frameworks, such as Apache Airflow, Apache Spark, or similar, to build scalable and efficient data pipelines and architectures. Lead Data Engineering Projects: Take the lead in managing and executing data engineering projects, providing technical guidance and oversight to ensure successful project delivery. Mentor Data Engineers: Share your extensive knowledge and experience in data engineering with junior team members, guiding and mentoring them to foster their growth and development in the field. Promote Knowledge Sharing: Contribute to our knowledge base by writing technical blogs and articles, promoting best practices in data engineering, and contributing to a culture of continuous learning and innovation. At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place! Our approach is simple: empower engineers with the best tools possible to make an impact within their industry. We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on-keyboard leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing.
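To make the orchestration responsibility above concrete, here is a minimal, hypothetical Apache Airflow DAG of the extract-transform-load shape this role describes. The DAG id, task bodies, and schedule are illustrative assumptions; a real pipeline would call Spark, Databricks, or dbt jobs instead of print statements.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder task bodies; real tasks would trigger Spark/Databricks jobs or dbt runs.
    def extract():
        print("pull raw files from the landing zone")

    def transform():
        print("clean and model the data")

    def load():
        print("publish curated tables for BI")

    with DAG(
        dag_id="example_daily_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t3 = PythonOperator(task_id="load", python_callable=load)

        t1 >> t2 >> t3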

Posted 1 week ago

Apply

8.0 years

7 - 10 Lacs

Vadodara

On-site


At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place! Our approach is simple: empower engineers with the best tools possible to make an impact within their industry. We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing. As a Senior Data Engineer at Rearc, you will be at the forefront of driving technical excellence within our data engineering team. Your expertise in data architecture, cloud-native solutions, and modern data processing frameworks will be essential in designing workflows that are optimized for efficiency, scalability, and reliability. You'll leverage tools like Databricks, PySpark, and Delta Lake to deliver cutting-edge data solutions that align with business objectives. Collaborating with cross-functional teams, you will design and implement scalable architectures while adhering to best practices in data management and governance. Building strong relationships with both technical teams and stakeholders will be crucial as you lead data-driven initiatives and ensure their seamless execution. What You Bring: 8+ years of experience in data engineering, showcasing expertise in diverse architectures, technology stacks, and use cases. Strong expertise in designing and implementing data warehouse and data lake architectures, particularly in AWS environments. Extensive experience with Python for data engineering tasks, including familiarity with libraries and frameworks commonly used in Python-based data engineering workflows. Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, dbt, or AWS Glue. Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask; proficiency with Spark and Databricks is highly desirable. Experience with SQL and NoSQL databases, including PostgreSQL, Amazon Redshift, Delta Lake, Iceberg, and DynamoDB. In-depth knowledge of data architecture principles and best practices, especially in cloud environments. Proven experience with AWS services, including expertise in using the AWS CLI, SDKs, and Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or AWS CDK. Exceptional communication skills, capable of clearly articulating complex technical concepts to both technical and non-technical stakeholders. Demonstrated ability to quickly adapt to new tasks and roles in a dynamic environment. What You'll Do: Strategic Data Engineering Leadership: Provide strategic vision and technical leadership in data engineering, guiding the development and execution of advanced data strategies that align with business objectives. Architect Data Solutions: Design and architect complex data pipelines and scalable architectures, leveraging advanced tools and frameworks (e.g., Apache Kafka, Kubernetes) to ensure optimal performance and reliability. Drive Innovation: Lead the exploration and adoption of new technologies and methodologies in data engineering, driving innovation and continuous improvement across data processes.
Technical Expertise: Apply deep expertise in ETL processes, data modeling, and data warehousing to optimize data workflows and ensure data integrity and quality. Collaboration and Mentorship: Collaborate closely with cross-functional teams to understand requirements and deliver impactful data solutions; mentor and coach junior team members, fostering their growth and development in data engineering practices. Thought Leadership: Contribute to thought leadership in the data engineering domain through technical articles, conference presentations, and participation in industry forums. Some More About Us: Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers. Our commitment is simple - finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together!
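For a flavor of the AWS-centric lake work this posting emphasizes, here is a small, hedged PySpark sketch that lands raw JSON events from S3 as date-partitioned Parquet. The bucket names, columns, and layout are invented for illustration; real jobs would typically take them from configuration or Terraform outputs.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_to_lake").getOrCreate()

    # Hypothetical raw bucket and prefix; the s3a connector must be on the Spark classpath.
    events = spark.read.json("s3a://example-raw-bucket/events/2024/")

    curated = (events
               .withColumn("event_date", F.to_date("event_time"))
               .filter(F.col("event_type").isNotNull()))

    # Partitioning by date keeps downstream Athena / Redshift Spectrum scans cheap.
    (curated.write
     .mode("append")
     .partitionBy("event_date")
     .parquet("s3a://example-curated-bucket/events/"))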

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Us: ArcelorMittal was formed in 2006 from the strategic merger of European company Arcelor and Indian-owned Mittal Steel. Over a journey of two decades, we have emerged as the world's leading steel and mining company, exerting our influence across 60+ countries with a robust industrial footprint in 18. We are a global team of 158,000+ talented individuals committed to building a better world with smarter low-carbon steel. Our strategies are not just about scale; they're also about leading a transformative change where innovation meets sustainability. We supply to major global markets—from automotive and construction to household appliances and packaging—supported by world-class R&D and distribution networks. ArcelorMittal Global Business and Technologies in India is our new hub of technological innovation and business solutions. Here, you'll find a thriving community of business professionals and technologists who bring together diverse and unique perspectives and experiences to disrupt the global steel manufacturing industry. This fusion ignites groundbreaking ideas and unlocks new avenues for sustainable business growth. We nurture a culture fueled by an entrepreneurial spirit and a passion for excellence, which prioritizes the advancement and growth of our team members. With flexible career pathways and access to the latest technology and business tools, we offer a space where you can learn, take ownership, and face exciting challenges every day. Job Title: Assistant Team Lead - Basis & Admin. Position Summary: We are seeking a proactive and technically skilled Assistant Team Lead to support and maintain enterprise solutions in Azure and SAP. The role focuses on ensuring availability and system performance across a complex landscape that includes Azure Data Lake, Databricks, Synapse Analytics, and SAP BW. This position plays a critical role in supporting operational visibility, security, efficiency, and decision-making. Key Responsibilities: Reporting Application Maintenance: Monitor and maintain reporting applications and dashboards used by CMO and Supply Chain teams. Ensure timely data refreshes, system availability, and issue resolution for daily operational and strategic reporting. Manage application incidents, perform root cause analysis, and drive permanent fixes in collaboration with data and IT teams. Coordinate testing and validation after data source changes, infrastructure upgrades, or new releases. Technical Platform Support: Maintain integrations and data flows between SAP BW on HANA, Azure Data Lake, Databricks, and Azure Synapse. Support performance tuning and optimization of queries and data pipelines to meet reporting SLAs. Collaborate with data engineers and developers to ensure robust and scalable reporting data models. Business Engagement: Work closely with Commercial, Supply Chain, Manufacturing, and Quality stakeholders to ensure reporting tools meet evolving business needs. Translate business reporting issues into technical resolutions and enhancements. Support user adoption and troubleshoot front-end issues related to dashboards and KPIs. Governance & Documentation: Ensure compliance with data governance, security, and change management standards. Maintain documentation for reporting processes, data lineage, access controls, and application architecture. Required Qualifications: Bachelor’s degree in Information Systems, Computer Science, or related field.
6-8 years of experience in BI/reporting application support, preferably in manufacturing or supply chain contexts. Proven experience managing a small team across multiple functions or geographies. Strong hands-on knowledge of Azure Data Lake, Databricks, Azure Synapse Analytics, and SAP BW on HANA. Strong experience working with IT Service Management (ITSM) platforms (such as ServiceNow). Experience with BI tools such as Power BI, Tableau, or similar platforms. Understanding of data pipelines, data modeling, and ETL/ELT processes. Excellent problem-solving skills and the ability to work across technical and business teams. Preferred Qualifications: Certifications in Azure (e.g., Azure Administrator, Data Engineer Associate), SAP, or ITIL. Experience working in Agile or DevOps-driven environments. Familiarity with commercial business processes, production planning, or supply chain operations. Working knowledge of scripting in SQL, Python (Databricks), or Spark. Exposure to Agile environments and DevOps for data platforms. What We Offer: A critical role in maintaining business-critical reporting tools for global operations. Collaboration with both IT and frontline manufacturing/supply chain teams. Access to modern data platforms and advanced analytics tools. Competitive compensation, benefits, and career development.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Primary Roles And Responsibilities: Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix the issues. Work with the business to understand the need in the reporting layer and develop a data model to fulfill reporting needs. Help junior team members to resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines in the scheduler via Airflow. Skills And Qualifications: Skills: SQL, PL/SQL, Spark, star and snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python. Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects. Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, Snowsight and Snowflake connectors. Deep understanding of star and snowflake dimensional modeling. Strong knowledge of Data Management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture. Should have hands-on experience in SQL and Spark (PySpark). Experience in building ETL / data warehouse transformation processes. Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization. Databricks Certified Data Engineer Associate/Professional Certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with a high attention to detail. Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks.
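Since this posting (and the near-identical listings for other locations below) centers on Snowflake loading alongside Databricks/ADF, here is a minimal, hedged Python sketch using the Snowflake connector to bulk-load staged Parquet files with COPY INTO. The account, stage, table, and credentials are placeholders; real values would come from a secrets manager.

    import snowflake.connector

    # Placeholder connection parameters; do not hard-code credentials in real pipelines.
    conn = snowflake.connector.connect(
        account="xy12345.east-us-2.azure",
        user="etl_user",
        password="***",
        warehouse="TRANSFORM_WH",
        database="ANALYTICS",
        schema="STAGING",
    )

    try:
        cur = conn.cursor()
        # COPY INTO is the usual bulk-load step once files land in an external stage.
        cur.execute("""
            COPY INTO STAGING.ORDERS
            FROM @ORDERS_STAGE/2024/
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """)
        print(cur.fetchall())
    finally:
        conn.close()

In an Airflow-orchestrated setup like the one described, this load step would typically run as one task downstream of the Databricks transformation tasks.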

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site


Primary Roles And Responsibilities: Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix the issues. Work with the business to understand the need in the reporting layer and develop a data model to fulfill reporting needs. Help junior team members to resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines in the scheduler via Airflow. Skills And Qualifications: Skills: SQL, PL/SQL, Spark, star and snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python. Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects. Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, Snowsight and Snowflake connectors. Deep understanding of star and snowflake dimensional modeling. Strong knowledge of Data Management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture. Should have hands-on experience in SQL and Spark (PySpark). Experience in building ETL / data warehouse transformation processes. Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization. Databricks Certified Data Engineer Associate/Professional Certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with a high attention to detail. Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Reference # 320191BR. Job Type: Full Time. Your Role: Do you want to play a pivotal role in the design, implementation, and successful delivery of our Planning & Forecasting platform? Do you have a proven track record in delivering analytical applications based on a TM1 architecture for global financial institutions? Are you experienced in working in a global team, delivering on frequent release cycles, according to Agile software development principles and DevOps best practices? We are looking for a software engineer to: be responsible for maintaining and evolving the current financial planning & forecasting applications; participate in a data-driven culture with predictive analytics and business intelligence at the end user’s fingertips; work effectively with a global team of software engineers, product specialists and business stakeholders; take ownership, be proactive and demonstrate perseverance in problem solving; embrace the complex business requirements and enjoy the challenge of implementing them; work in iterations, according to the Agile methodology; manage stakeholder expectations through transparent communication; be ready to learn new technologies as business requirements evolve; deliver fully automated solutions, embracing CI/CD. Your Team: You will be working in the Group Finance Technology organization in Pune or Mumbai. As part of the larger Group Functions Technology organization, Group Finance delivers quality, innovative solutions that support our business partners in achieving their operational goals. Technology is at the very heart of UBS. As a team of thousands, we have a critical role to play in building, delivering, and maintaining the systems, services and infrastructure that power our business. Technology is about people and every person has a crucial role to play on the UBS Technology team. Your Expertise: 5+ years of experience as a TM1 software engineer working in a finance planning environment, with a Bachelor’s / Master’s degree or equivalent essential. IBM Planning Analytics technical skills including Turbo Integrator, advanced cube rules, conditional feeders, REST API and PAfE. Strong knowledge of Excel / VBA as well as batch scripting (e.g. PowerShell). Experience of Python (including TM1py), Apliqo UX (Cubewise) and Azure Kubernetes is beneficial. Broader experience of Apache Spark (Databricks), Business Intelligence (Power BI) and Machine Learning is also beneficial. Strong analytical, problem-solving and synthesizing skills (you know how to figure stuff out). Able to produce secure and clean code that is stable, operational, consistent and well-performing. Experience with Agile methodology (Scrum), including the use of tools such as GitLab. About Us: UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire: We may request you to complete one or more assessments during the application process. Join Us: At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs.
From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact? Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce. Your Career Comeback We are open to applications from career returners. Find out more about our program on ubs.com/careercomeback.
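The posting lists Python with TM1py as a beneficial skill; the sketch below shows, under the assumption that TM1py's usual API applies, how such a script might connect to an IBM Planning Analytics (TM1) server and read a small cube slice via MDX. The host, credentials, cube, and dimension names are all hypothetical.

    from TM1py import TM1Service

    # Placeholder connection details; TM1py is the open-source Python wrapper
    # around the Planning Analytics REST API mentioned in the posting.
    with TM1Service(address="tm1-host", port=8010, user="admin", password="***", ssl=True) as tm1:
        print(tm1.server.get_product_version())

        # Pull a small slice of a hypothetical planning cube for reconciliation or testing.
        mdx = """
        SELECT {[Version].[Forecast]} ON COLUMNS,
               {[Account].[Revenue]} ON ROWS
        FROM [Financial Planning]
        """
        cells = tm1.cells.execute_mdx(mdx)
        for coordinates, cell in cells.items():
            print(coordinates, cell["Value"])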

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Primary Roles And Responsibilities: Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix the issues. Work with the business to understand the need in the reporting layer and develop a data model to fulfill reporting needs. Help junior team members to resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines in the scheduler via Airflow. Skills And Qualifications: Skills: SQL, PL/SQL, Spark, star and snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python. Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects. Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, Snowsight and Snowflake connectors. Deep understanding of star and snowflake dimensional modeling. Strong knowledge of Data Management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture. Should have hands-on experience in SQL and Spark (PySpark). Experience in building ETL / data warehouse transformation processes. Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization. Databricks Certified Data Engineer Associate/Professional Certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with a high attention to detail. Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


This role is for one of our clients. Industry: Technology, Information and Media. Seniority level: Mid-Senior level. Min Experience: 6 years. Location: Mumbai. Job Type: Full-time. We are looking for a seasoned Lead Data Visualization Specialist to join our growing Data & Analytics team. This role is perfect for a visualization expert who thrives at the intersection of design and data. You will lead the development of high-impact dashboards and data storytelling experiences that help stakeholders make strategic business decisions. Working closely with data engineers, analysts, and business users, you will transform complex data into actionable insights using tools like Power BI, Tableau, and Qlik Sense, while also leveraging Azure Databricks, PySpark, and SQL to manage the backend data workflows. What You’ll Do: Data Visualization & Dashboard Development: Build sleek, interactive, and scalable dashboards using Power BI, Tableau, and Qlik Sense. Develop intuitive layouts and user journeys for business users to explore KPIs and trends. Embed visual storytelling principles to make data interpretation easy and insightful. Data Integration & Modeling: Collaborate with data engineers to clean, shape, and model data from diverse sources. Use SQL and PySpark to query, transform, and enrich data pipelines. Manage complex datasets within Microsoft Azure cloud environments, including Databricks and Data Factory. Performance & Optimization: Design dashboards optimized for speed, usability, and enterprise-scale data volumes. Troubleshoot performance bottlenecks and enhance backend queries or models accordingly. Stakeholder Engagement & Insight Delivery: Work with cross-functional teams to understand business needs and translate them into analytical visuals. Present data-driven insights to non-technical audiences, tailoring messaging to various stakeholders. Governance, Standards & Mentorship: Champion visualization standards and data governance best practices. Mentor junior visualizers and analysts on tools, techniques, and storytelling principles. Help define scalable templates and reusable components for the organization. What You Bring: 6+ years of experience in data visualization or business intelligence roles. Mastery of at least two of the following tools: Power BI, Tableau, Qlik Sense. Strong SQL capabilities and hands-on experience with PySpark for large-scale data processing. Deep knowledge of the Azure data ecosystem, including Databricks, Azure Synapse, and Data Factory. Proven ability to translate raw data into powerful, intuitive stories through visuals. Strong grasp of UX principles as applied to data dashboards. Ability to work autonomously and manage multiple stakeholders and priorities. Excellent verbal and written communication skills. Bonus Points for: Certifications in Power BI, Tableau, or Microsoft Azure. Experience in predictive modeling, trend analysis, or machine learning environments. Exposure to agile methodologies and product-based data teams.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Senior Engineer, Data Modeling - Gurgaon/Bangalore, India. AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL’s executive leadership team to maximize benefits and facilitate sustained competitive advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics team (IDA), is focused on driving innovation through optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market. As we develop an enterprise-wide data and digital strategy that moves us toward greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team’s efforts towards creating, enhancing, and stabilizing the Enterprise data lake through the development of data pipelines. This role requires a person who is a team player and can work well with team members from other disciplines to deliver data in an efficient and strategic manner. What You’ll Be Doing - What will your essential responsibilities include? Act as a data engineering expert and partner to Global Technology and data consumers in controlling complexity and cost of the data platform, whilst enabling performance, governance, and maintainability of the estate. Understand current and future data consumption patterns and architecture (at a granular level), and partner with Architects to ensure optimal design of data layers. Apply best practices in data architecture, for example, balance between materialization and virtualization, optimal level of de-normalization, caching and partitioning strategies, choice of storage and querying technology, and performance tuning. Lead and execute hands-on research into new technologies, formulating frameworks to assess new technology against business benefit and implications for data consumers. Act as a best practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability, and release, enabling rapid growth in data inventory and utilization of the Data Science Platform. Design prototypes and work in a fast-paced iterative solution delivery model. Design, develop and maintain ETL pipelines using PySpark in Azure Databricks using Delta tables. Use Harness for the deployment pipeline. Monitor the performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed. Diagnose system performance issues related to data processing and implement solutions to address them. Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture. Maintain integrity and quality across all pipelines and environments. Understand and follow secure coding practices to ensure code is not vulnerable. You will report to the Application Manager. What You Will Bring: We’re looking for someone who has these abilities and skills: Required Skills And Abilities: Effective communication skills. Bachelor’s degree in Computer Science, Mathematics, Statistics, Finance, a related technical field, or equivalent work experience. Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills. Relevant years of programming experience using Databricks.
Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse and ADLS). Solid knowledge of network and firewall concepts. Solid experience writing, optimizing and analyzing SQL. Relevant years of experience with Python. Ability to break down complex data requirements and architect solutions into achievable targets. Robust familiarity with Software Development Life Cycle (SDLC) processes and workflow, especially Agile. Experience using Harness. Technical lead responsible for both individual and team deliveries. Desired Skills And Abilities: Worked in big data migration projects. Worked on performance tuning both at the database and big data platforms. Ability to interpret complex data requirements and architect solutions. Distinctive problem-solving and analytical skills combined with robust business acumen. Excellent fundamentals in Parquet and Delta file formats. Effective knowledge of the Azure cloud computing platform. Familiarity with reporting software - Power BI is a plus. Familiarity with dbt is a plus. Passion for data and experience working within a data-driven organization. You care about what you do, and what we do. Who We Are: AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com. What We Offer - Inclusion: AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It’s about helping one another — and our business — to move forward and succeed. Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe. Robust support for Flexible Working Arrangements. Enhanced family-friendly leave benefits. Named to the Diversity Best Practices Index. Signatory to the UK Women in Finance Charter. Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer. Total Rewards: AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence. Sustainability: At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities.
We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations. Our Pillars: Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We’re committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans. Addressing climate change: The effects of a changing climate are far reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions. Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting. AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving. For more information, please see axaxl.com/sustainability.
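The responsibilities above call for maintaining ETL pipelines on Azure Databricks with Delta tables; a common pattern for that is an incremental MERGE (upsert) into a Delta table. The sketch below is a hedged illustration with invented table names and a hypothetical policy_id key, not AXA XL's actual pipeline.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("policy_upsert").getOrCreate()

    # Hypothetical incoming updates and target table; on Databricks the Delta libraries are preinstalled.
    updates = spark.read.format("delta").load("/mnt/landing/policy_updates")
    target = DeltaTable.forName(spark, "enterprise_lake.policies")

    # MERGE keeps the lake table in sync without full rewrites and preserves Delta history for auditing.
    (target.alias("t")
     .merge(updates.alias("u"), "t.policy_id = u.policy_id")
     .whenMatchedUpdateAll()
     .whenNotMatchedInsertAll()
     .execute())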

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description: AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Title: Data Scientist. Location: Bangalore. Reporting to: Manager/Senior Manager - Analytics. 1. Purpose of the role: Contributing to the Data Science efforts of AB InBevʼs global non-commercial analytics capability of Supply Analytics. The candidate will be required to contribute and may also need to guide the DS team staffed on the area and assess the efforts required to scale and standardize the use of Data Science across multiple ABI markets. 2. Key Tasks and Accountabilities: Understand the business problem and translate that to an analytical problem; participate in the solution design process. Manage the full AI/ML lifecycle, including data preprocessing, feature engineering, model training, validation, deployment, and monitoring. Develop reusable and modular Python code adhering to OOP (Object-Oriented Programming) principles. Design, develop, and deploy machine learning models into production environments on Azure. Collaborate with data scientists, software engineers, and other stakeholders to meet business needs. Ability to communicate findings clearly to both technical and business stakeholders. 3. Qualifications, Experience, Skills: Level of educational attainment required (one or more of the following): B.Tech/BE/Masters in CS/IS/AI/ML. Previous work experience required: Minimum 3 years of relevant experience. Technical skills required - Must Have: Strong expertise in Python, including advanced knowledge of OOP concepts. Exposure to AI/ML methodologies with previous hands-on experience in ML concepts like forecasting, clustering, regression, classification, and optimization using Python. Azure tech stack, Databricks, and MLflow on any cloud platform. Airflow for orchestrating and automating workflows. MLOps concepts and containerization tools like Docker. Experience with version control tools such as Git. Consistently display an intent for problem solving. Strong communication skills (vocal and written). Ability to effectively communicate and present information at various levels of an organization. Good To Have: Preferred industry exposure in the Manufacturing domain. Product building experience. Other Skills required: Passion for solving problems using data. Detail oriented, analytical and inquisitive. Ability to learn on the go. Ability to work independently and with others. And above all of this, an undying love for beer! We dream big to create a future with more cheers.
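To illustrate the MLflow-based model lifecycle this posting asks about, here is a minimal, hedged sketch that trains a scikit-learn regressor on synthetic data and logs parameters, metrics, and the model to MLflow. The experiment name and data are invented stand-ins for a supply forecasting use case.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a supply forecasting dataset; real features would come from Databricks tables.
    X, y = make_regression(n_samples=500, n_features=8, noise=0.3, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_experiment("supply-forecasting-demo")
    with mlflow.start_run():
        model = RandomForestRegressor(n_estimators=200, random_state=42)
        model.fit(X_train, y_train)

        mae = mean_absolute_error(y_test, model.predict(X_test))
        mlflow.log_param("n_estimators", 200)
        mlflow.log_metric("mae", mae)
        # Logging the model makes it available to the MLflow registry for deployment and monitoring.
        mlflow.sklearn.log_model(model, "model")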

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis. Grade - T5. Please note that the job will close at 12 am on the posting close date, so please submit your application prior to the close date. Accountabilities - What your main responsibilities are: Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity. Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization; data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data. Data Quality Management - Cleanse the data and improve data quality and readiness for analysis; drive standards, define and implement/improve data governance strategies and enforce best practices to scale data analysis across platforms. Data Transformation - Process data by cleansing it and transforming it into the proper storage structure for the purpose of querying and analysis using ETL and ELT processes. Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations. Qualifications & Specifications: Master's/Bachelor’s degree in Engineering/Computer Science/Math/Statistics or equivalent. Strong programming skills in Python/PySpark/SAS. Proven experience with large data sets and related technologies - Hadoop, Hive, distributed computing systems, Spark optimization. Experience on cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS Storage, Azure DevOps. Hands-on experience with Databricks, Delta Lake, and Workflows. Should have knowledge of DevOps processes and tools like Docker, CI/CD, Kubernetes, Terraform, Octopus. Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs. Experience with any BI tool like Power BI (good to have). Cloud migration experience (good to have). Cloud and Data Engineering certification (good to have). Working in an Agile environment. 4-6 years of relevant work experience is required. Experience with stakeholder management is an added advantage. What We Are Looking For - Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or similar discipline. Master's degree or PhD preferred. Knowledge, Skills And Abilities: Fluency in English. Analytical skills. Accuracy and attention to detail. Numerical skills. Planning and organizing skills. Presentation skills. Data modeling and database design. ETL (Extract, Transform, Load) skills. Programming skills. FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. Our Culture Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970’s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world’s most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities. Where you can make a difference. Where no two days are the same. Your Role: As a senior software engineer with Capgemini, you will have 6+ years of experience in Azure technology with a strong project track record. In this role you will play a key role in: Strong customer orientation, decision making, problem solving, communication and presentation skills. Very good judgement skills and ability to shape compelling solutions and solve unstructured problems with assumptions. Very good collaboration skills and ability to interact with multi-cultural and multi-functional teams spread across geographies. Strong executive presence and entrepreneurial spirit. Superb leadership and team building skills with ability to build consensus and achieve goals through collaboration rather than direct line authority. Your Profile: Experience with Azure Databricks and Data Factory. Experience with Azure data components such as Azure SQL Database, Azure SQL Warehouse, and Synapse Analytics. Experience in Python/PySpark/Scala/Hive programming. Experience with Azure Databricks/ADB is a must-have. Experience with building CI/CD pipelines in data environments. Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: Senior DevOps Engineer – Azure
Location: Ahmedabad
Experience: 4+ years (Relevant DevOps Experience)
Employment Type: Full Time
Department: DevOps

About Simform: Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. Simform is a strong partner for Microsoft, AWS, Google Cloud, and Databricks. With a presence in 5+ countries, Simform primarily serves North America, the UK, and the Northern European market. Simform takes pride in being one of the most reputed employers in the region, having created a thriving work culture with a high work-life balance that gives a sense of freedom and opportunity to grow.

Job Overview: We are seeking a skilled and detail-oriented Senior DevOps Engineer with a strong background in Azure cloud infrastructure. This role is ideal for professionals who are passionate about automation, scalability, and ensuring best practices in CI/CD and cloud deployments. You will be responsible for managing and modernizing medium-sized, multi-tier environments, building and maintaining CI/CD pipelines, and supporting application and infrastructure reliability and security.

Key Responsibilities: Design, implement, and maintain CI/CD pipelines for efficient and secure application deployment. Lead infrastructure development and operational support in Azure, ensuring high availability and performance. Work with modern containerization technologies like Docker, Docker Compose, and orchestrators like Kubernetes. Implement and follow Git best practices across teams. Use Infrastructure as Code (IaC) tools, preferably Terraform, to provision cloud resources. Plan and enforce DevSecOps processes and ensure security compliance throughout the software development lifecycle. Develop and monitor application and infrastructure performance using tools like Azure Monitor, Managed Grafana, and other observability tools. Drive multi-tenancy SaaS architecture implementation and deployment best practices. Collaborate with development teams to ensure alignment with DevOps best practices. Troubleshoot issues across development, testing, and production environments. Provide leadership in infrastructure planning, design reviews, and incident management.

Required Skills & Experience: 4+ years of relevant hands-on DevOps experience. Strong communication and interpersonal skills. Solid foundational knowledge in DevOps methodologies and tools. Expertise in Azure cloud services, with a working knowledge of AWS also preferred. Proficiency in Kubernetes, including deploying, scaling, and maintaining clusters. Experience with web servers like Nginx and Apache. Familiarity with the Well-Architected Framework (Azure). Practical experience with monitoring and observability tools in Azure. Working knowledge of DevSecOps tools and security best practices. Proven debugging and troubleshooting skills across infrastructure and applications. Experience supporting multi-tenant SaaS platforms is a plus. Experience in application performance monitoring and tuning. Experience with Azure dashboarding, logging, and monitoring tools, such as Managed Grafana, is a strong advantage.
Preferred Qualifications:
Azure certifications (e.g., AZ-400, AZ-104)
Experience in cloud migration and application modernization
Familiarity with tools like Prometheus, Grafana, the ELK stack, or similar
Leadership experience or mentoring of junior engineers

Why Join Us: Young Team, Thriving Culture
Flat-hierarchical, friendly, engineering-oriented, and growth-focused culture
Well-balanced learning and growth opportunities
Free health insurance
Office facilities with a game zone, in-office kitchen with affordable lunch service, and free snacks
Sponsorship for certifications/events and library service
Flexible work timing, leaves for life events, WFH and hybrid options

Posted 1 week ago

Apply

7.0 years

0 Lacs

India

On-site

Linkedin logo

Job Description: We are seeking a skilled and experienced Azure Data Engineer to join our data engineering team. The ideal candidate will have a strong background in building and optimizing data pipelines and data sets, utilizing Azure Data Factory, Databricks, PySpark, and SQL. You will work closely with data architects, data scientists, and business stakeholders to design and implement scalable, reliable, and high-performance data solutions on the Azure platform.

Required Skills and Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
7+ years of experience as a Data Engineer or in a similar role.
Strong experience with Azure Data Factory for ETL/ELT operations.
Proficiency in Databricks and PySpark for big data processing and transformation.
Advanced SQL skills for data manipulation and reporting.
Hands-on experience with data modeling, ETL development, and data warehousing.
Experience with Azure services like Azure Synapse, Azure Blob Storage, and Azure SQL Database.
Understanding of data governance principles and best practices.
Strong analytical and problem-solving skills.
Familiarity with Python or other scripting languages.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Job Family: Data Science & Analysis (India)
Travel Required: None
Clearance Required: None

What You Will Do
Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes. Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality. Build and optimize data architectures for operational and analytical purposes. Collaborate with cross-functional teams to gather and define data requirements. Implement data quality, data governance, and data security practices. Manage and optimize cloud-based data platforms (Azure/AWS). Develop and maintain Python/PySpark libraries for data ingestion, processing, and integration with both internal and external data sources. Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks). Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs. Develop frameworks for data ingestion, transformation, and validation. Mentor junior data engineers and guide best practices in data engineering. Evaluate and integrate new technologies and tools to improve data infrastructure. Ensure compliance with data privacy regulations (HIPAA, etc.). Monitor performance and troubleshoot issues across the data ecosystem. Automate deployment of data pipelines using GitHub Actions / Azure DevOps.

What You Will Need
Bachelor's or master's degree in Computer Science, Information Systems, Statistics, Math, Engineering, or a related discipline. Minimum 5+ years of solid hands-on experience in data engineering and cloud services. Extensive working experience with advanced SQL and a deep understanding of SQL. Good experience in Azure Data Factory (ADF), Databricks, Python, and PySpark. Good experience in modern data storage concepts: data lake, lakehouse. Experience in other cloud services (AWS) and data processing technologies will be an added advantage. Ability to enhance, develop, and resolve defects in ETL processes using cloud services. Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, EDI X12 files, and Access databases. Experience with software development methodologies (Agile, Waterfall) and version control tools. Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills. Good communication skills.

What Would Be Nice To Have
AWS ETL platform: Glue, S3. One or more programming languages such as Java or .NET. Experience in the US healthcare domain and insurance claim processing.

What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse
Guidehouse is an Equal Opportunity Employer–Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Primary Roles And Responsibilities
Developing Modern Data Warehouse solutions using Snowflake, Databricks, and ADF.
Ability to provide forward-thinking solutions in the data engineering and analytics space.
Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
Triage issues to find gaps in existing pipelines and fix them.
Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
Help junior team members resolve issues and technical challenges.
Drive technical discussions with the client architect and team members.
Orchestrate the data pipelines in the scheduler via Airflow.

Skills And Qualifications
Skills: SQL, PL/SQL, Spark, Star and Snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python
Bachelor's and/or master's degree in computer science or equivalent experience.
Must have a total of 6+ years of IT experience and 3+ years' experience in data warehouse/ETL projects.
Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight, and Snowflake connectors.
Deep understanding of Star and Snowflake dimensional modeling.
Strong knowledge of data management principles.
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
Should have hands-on experience in SQL and Spark (PySpark).
Experience in building ETL / data warehouse transformation processes.
Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
Experience working with structured and unstructured data, including imaging & geospatial data.
Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting, and query optimization.
Databricks Certified Data Engineer Associate/Professional certification (desirable).
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Should have experience working in Agile methodology.
Strong verbal and written communication skills.
Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Primary Roles And Responsibilities
Developing Modern Data Warehouse solutions using Snowflake, Databricks, and ADF.
Ability to provide forward-thinking solutions in the data engineering and analytics space.
Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
Triage issues to find gaps in existing pipelines and fix them.
Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
Help junior team members resolve issues and technical challenges.
Drive technical discussions with the client architect and team members.
Orchestrate the data pipelines in the scheduler via Airflow.

Skills And Qualifications
Skills: SQL, PL/SQL, Spark, Star and Snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python
Bachelor's and/or master's degree in computer science or equivalent experience.
Must have a total of 6+ years of IT experience and 3+ years' experience in data warehouse/ETL projects.
Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight, and Snowflake connectors.
Deep understanding of Star and Snowflake dimensional modeling.
Strong knowledge of data management principles.
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
Should have hands-on experience in SQL and Spark (PySpark).
Experience in building ETL / data warehouse transformation processes.
Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
Experience working with structured and unstructured data, including imaging & geospatial data.
Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting, and query optimization.
Databricks Certified Data Engineer Associate/Professional certification (desirable).
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Should have experience working in Agile methodology.
Strong verbal and written communication skills.
Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks

Posted 1 week ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Role: Consultant - Generative AI
Location: Gurgaon

We are seeking a highly skilled Generative AI Engineer to join the team. The Generative AI Engineer will play a pivotal role in designing, coding, and deploying advanced AI solutions using state-of-the-art technologies such as Databricks, AI Fabric, Azure, and Snowflake. This role requires a deep understanding of AI/ML frameworks and cloud-based environments, focusing on building scalable, high-performance AI solutions that drive value for the global network. Key responsibilities include:

1. AI Solution Development:
• Design, develop, and deploy Generative AI models and solutions that address complex business challenges across advisory, tax, and audit services.
• Leverage platforms such as Databricks for data engineering and AI model development, AI Fabric for orchestration and deployment, and Snowflake for scalable data management.
• Utilize Azure cloud services to implement and scale AI solutions, ensuring high availability, performance, and security.

2. Technical Leadership and Collaboration:
• Collaborate with data scientists, AI architects, and software engineers to define technical requirements and develop end-to-end AI solutions.
• Lead the development of AI models from experimentation and prototyping through to production, ensuring alignment with business objectives.
• Work closely with cross-functional teams to integrate AI solutions into existing workflows and systems, optimizing for efficiency and usability.

3. Coding and Implementation:
• Write high-quality, maintainable code using Python, Scala, or similar programming languages, focusing on AI/ML libraries and frameworks.
• Develop and optimize data pipelines using Databricks, ensuring seamless data flow from ingestion to AI model training and inference.
• Implement AI solutions using AI Fabric, focusing on model orchestration, deployment, and monitoring within a cloud environment.

4. Data Management and Integration:
• Design and manage data architectures using Snowflake, ensuring data is organized, accessible, and secure for AI model training and deployment.
• Integrate data from various sources, transforming and preparing it for AI model development, ensuring data quality and integrity.
• Work with large datasets, applying best practices for data engineering, ETL processes, and real-time data processing.

5. Cloud & Infrastructure Management:
• Deploy AI models and services in Azure, utilizing cloud-native tools and best practices to ensure scalability, reliability, and security.
• Implement CI/CD pipelines to automate the deployment and management of AI models, ensuring rapid iteration and continuous delivery.
• Optimize infrastructure for AI workloads, balancing performance, cost, and resource utilization.

6. Performance Tuning and Optimization:
• Continuously monitor and optimize AI models and data pipelines to improve performance, accuracy, and scalability.
• Implement strategies for model fine-tuning, hyperparameter optimization, and feature engineering to enhance AI solution effectiveness.
• Troubleshoot and resolve technical issues related to AI model deployment, data processing, and cloud infrastructure.

7. Innovation and Continuous Improvement:
• Stay updated with the latest advancements in Gen AI, cloud computing, and big data technologies, applying new techniques to improve solutions.
• Experiment with emerging technologies and frameworks to drive innovation.
• Contribute to the development of AI best practices, coding standards, and technical documentation to ensure consistency and quality across projects.

Experience Required:
2+ years of experience in AI, Machine Learning, or related fields, with hands-on experience in developing and deploying AI solutions.
• Proven experience with AI frameworks such as TensorFlow and PyTorch, and experience working with Databricks, AI Fabric, and Snowflake.
• Extensive experience with Azure cloud services, including AI and data services, and a strong background in cloud-native development.
• Expertise in coding with Python, Scala, or similar languages, with a focus on AI/ML libraries and big data processing. Proficiency in designing and coding AI models, data pipelines, and cloud-based solutions.
• Strong understanding of AI/ML algorithms, data engineering, and model deployment strategies.
• Experience with cloud infrastructure management, particularly in Azure, and the ability to optimize AI workloads for performance and cost.
• Excellent problem-solving skills and the ability to work collaboratively in a cross-functional team environment.
• Strong communication skills, with the ability to articulate complex technical concepts to technical and non-technical stakeholders.

Please share your resume at shikha@tdnewton.com

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Role: Data Engineer with Snowflake and DBT
Experience: 5 to 15 Years

Job Description: 5+ years of relevant experience in data engineering or backend development roles. Expertise in Microsoft Azure, including Azure Data Factory, Azure Data Lake, and Azure Databricks. Hands-on experience with Snowflake & DBT – including Snowflake SQL, data modeling, and performance tuning. Strong proficiency in Python/Spark for data processing and scripting. Strong understanding of ETL/ELT processes, data integration patterns, and best practices. Familiarity with version control (e.g., Git) and Agile development methodologies. Experience with CI/CD tools like Azure DevOps is a plus. Familiarity with data catalog and data governance tools like Purview.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

About Gameskraft
Established in 2017, Gameskraft has become one of India's fastest-growing companies. We are building the world's most-loved online gaming ecosystem - one game at a time. Started by a group of passionate gamers, we have grown from a small team of five members to a large family of 600+ Krafters, working out of our office in Prestige Tech Park, Bangalore. Our short-term success lies in the fact that we strive to focus on building a safe, secure, and responsible gaming environment for everyone. Our vision is to create unmatched experiences every day, everywhere. We set the highest benchmarks in the industry in terms of design, technology, and intuitiveness. We are also the industry's only ISO 27001 and ISO 9001 certified gaming company.

About the role
We are hiring a Senior Data Engineer at Gameskraft, one of India's fastest-growing gaming companies, to build and scale a robust data platform. The role involves designing and optimizing data pipelines, developing scalable infrastructure, and ensuring seamless data accessibility for business insights.

Key Responsibilities:
Building and optimizing big data pipelines, architectures, and datasets to handle large-scale data.
Enhancing infrastructure for scalability, automation, and data delivery improvements.
Developing real-time and batch processing solutions using Kafka, Spark, and Airflow.
Ensuring data governance, security compliance, and high availability.
Collaborating with product, business, and analytics teams to support data needs.

Tech Stack:
Big Data Tools: Spark, Kafka, Databricks (Delta Tables), ScyllaDB, Redshift
Data Pipelines & Workflow: Airflow, EMR, Glue, Athena
Programming: Java, Scala, Python
Cloud & Storage: AWS
Databases: SQL, NoSQL (ScyllaDB, OpenSearch)
Backend: Spring Boot

What we expect you will bring to the table:
1. Cutting-Edge Technology & Scale
At Gameskraft, you will be working on some of the most advanced big data technologies, including Databricks Delta Tables, ScyllaDB, Spark, Kafka, Airflow, and Spring Boot. Our systems handle billions of data points daily, ensuring real-time analytics and high-scale performance. If you're passionate about big data, real-time streaming, and cloud computing, this role offers the perfect challenge.
2. Ownership & Impact
Unlike rigid corporate structures, Gameskraft gives engineers complete freedom and ownership to design, build, and optimize large-scale data pipelines. Your work directly impacts business decisions, game fairness, and player experience, ensuring data is actionable and insightful.
3. High-Growth, Fast-Paced Environment
We are one of India's fastest-growing gaming companies, scaling rapidly since 2017. You will be part of a dynamic team that moves fast, innovates continuously, and disrupts the industry with cutting-edge solutions.
4. Strong Engineering Culture
We value technical excellence, continuous learning, and deep problem-solving. We encourage engineers to experiment, contribute, and grow, making this an ideal place for those who love tackling complex data engineering challenges.

Why Join Gameskraft?
Work on high-scale, real-time data processing challenges.
Own end-to-end design and implementation of data pipelines.
Collaborate with top-tier engineers and data scientists.
Enjoy a fast-growing and financially stable company.
Freedom to innovate and contribute at all levels.
Work Culture
A true startup culture - young and fast-paced, where you are driven by personal ownership of solving challenges that help you grow fast.
Focus on innovation, data orientation, being results-driven, taking on big goals, and adapting fast.
A high-performance, meritocratic environment, where we share ideas, debate, and grow together with each new product.
Massive and direct impact on the work you do, with growth through solving dynamic challenges.
Leveraging technology & analytics to solve large-scale challenges.
Working with cross-functional teams to create great products and take them to market.
Rub shoulders with some of the brightest & most passionate people in the gaming & consumer internet industry.

Compensation & Benefits
Attractive compensation and ESOP packages.
INR 5 Lakh medical insurance cover for yourself and your family.
Fair & transparent performance appraisals.
An attractive car lease policy.
Relocation benefits.
A vibrant office space with fully stocked pantries. And your lunch is on us!

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Title: Data Engineer
Location: Bangalore
Mandatory Skills: AWS and Databricks, with at least 4+ years of experience in both
Experience: 6-9 years
Notice Period: Immediate to 30 days

Posted 1 week ago

Apply

4.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Description
This is a full-time position that requires strong experience and knowledge of Windows Servers and Azure technologies such as Entra ID, storage accounts, App Services, etc., with an understanding of other clouds like AWS and GCP. The Senior Systems Administrator will be responsible for configuring, deploying, monitoring, and managing Trinity's infrastructure hosted on Azure, AWS, and GCP. Additionally, the Senior Systems Administrator will be responsible for ensuring that proper backups of the infrastructure are in place, regardless of platform. This position also includes responsibility for vulnerability and patch management. The Senior Systems Administrator will work directly with other departments of the organization when and where needed.

Technical Skills:
Strong knowledge of Microsoft Windows operating systems.
Strong experience with Azure cloud technologies such as (but not limited to) Azure Entra ID, App Service, Databricks, Storage Accounts, Virtual Machines, Azure Recovery Services, and networking components.
Strong knowledge of enterprise applications and SSO configurations.
Knowledge of security tools management such as Zscaler, SentinelOne, or others.
Strong experience in backup administration using tools such as Veeam Backup.
Strong experience in monitoring tools such as SolarWinds, LogicMonitor, etc.
Strong understanding of networking protocols such as DNS, DHCP, TCP/IP, HTTP/HTTPS, and SFTP.
Strong knowledge of patch management and vulnerability remediation.
Should have knowledge of automation using PowerShell scripting.
Should be good at maintaining SOPs and documentation.
Should be well versed with the ITIL process and ticketing tools such as ServiceNow.

Technical Skills (Good to Have):
Experience in Linux operating systems troubleshooting.
Experience with SOC or other common IT regulatory standards.
Experience with AWS/GCP cloud services.
Experience in Office 365 administration and support.
Experience with GitHub.
Experience with Azure DevOps.
Basic understanding of network management on Cisco switches and firewalls.

Qualifications
Education: B.E/B.Tech in Computer Science or a related field
Work Experience: 4 to 7 years of hands-on experience in system administration, network management, and security protocols on the Azure cloud. The ideal candidate will have a proven track record of managing complex systems and ensuring their reliability, security, and performance.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 16 Lacs

Bangalore Rural, Bengaluru

Work from Office

Naukri logo

Experience in designing, building, and managing data solutions on Azure. Design, develop, and optimize big data pipelines and architectures on Azure. Implement ETL/ELT processes using Azure Data Factory, Databricks, and Spark.

Required Candidate Profile
5+ years of experience in data engineering and big data technologies. Hands-on experience with Azure services (Azure Data Factory, Azure Synapse, Azure SQL, ADLS, etc.). Databricks Certification (Mandatory).

Posted 1 week ago

Apply

Exploring Databricks Jobs in India

Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Databricks professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Career Path

In the field of Databricks, a typical career path may include:

  • Junior Developer
  • Senior Developer
  • Tech Lead
  • Architect

Related Skills

In addition to Databricks expertise, other skills that are often expected or helpful alongside Databricks include:

  • Apache Spark
  • Python/Scala programming
  • Data modeling
  • SQL
  • Data visualization tools
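To give a flavor of how these skills typically come together on Databricks, here is a minimal PySpark sketch combining the DataFrame API with Spark SQL. It is illustrative only: the table and column names are hypothetical, and it assumes a Spark session is available (as it is by default in a Databricks notebook).

```python
from pyspark.sql import SparkSession, functions as F

# In a Databricks notebook a SparkSession named `spark` already exists;
# getOrCreate() makes this sketch runnable in a plain PySpark environment too.
spark = SparkSession.builder.appName("skills-demo").getOrCreate()

# Hypothetical source data: one row per ride with a fare amount.
rides = spark.createDataFrame(
    [("BLR", 250.0), ("BLR", 180.0), ("HYD", 320.0)],
    ["city", "fare"],
)

# DataFrame API (Python + Spark): average fare per city.
summary = rides.groupBy("city").agg(F.avg("fare").alias("avg_fare"))

# Spark SQL: the same aggregation expressed as SQL over a temporary view.
rides.createOrReplaceTempView("rides")
summary_sql = spark.sql("SELECT city, AVG(fare) AS avg_fare FROM rides GROUP BY city")

summary.show()
summary_sql.show()
```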

Interview Questions

  • What is Databricks and how is it different from Apache Spark? (basic)
  • Explain the concept of lazy evaluation in Databricks. (medium)
  • How do you optimize performance in Databricks? (advanced)
  • What are the different cluster modes in Databricks? (basic)
  • How do you handle data skewness in Databricks? (medium)
  • Explain how you can schedule jobs in Databricks. (medium)
  • What is the significance of Delta Lake in Databricks? (advanced)
  • How do you handle schema evolution in Databricks? (medium)
  • What are the different file formats supported by Databricks for reading and writing data? (basic)
  • Explain the concept of checkpointing in Databricks. (medium)
  • How do you troubleshoot performance issues in Databricks? (advanced)
  • What are the key components of Databricks Runtime? (basic)
  • How can you secure your data in Databricks? (medium)
  • Explain the role of MLflow in Databricks. (advanced)
  • How do you handle streaming data in Databricks? (medium)
  • What is the difference between Databricks Community Edition and Databricks Workspace? (basic)
  • How do you set up monitoring and alerting in Databricks? (medium)
  • Explain the concept of Delta caching in Databricks. (advanced)
  • How do you handle schema enforcement in Databricks? (medium)
  • What are the common challenges faced in Databricks projects and how do you overcome them? (advanced)
  • How do you perform ETL operations in Databricks? (medium)
  • Explain the concept of MLflow Tracking in Databricks. (advanced)
  • How do you handle data lineage in Databricks? (medium)
  • What are the best practices for data governance in Databricks? (advanced)
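A few of the questions above lend themselves to short, concrete illustrations. For the lazy-evaluation question, the key point is that transformations only build a logical plan and nothing executes until an action runs. The sketch below is a minimal example in plain PySpark with hypothetical column names:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lazy-eval-demo").getOrCreate()

df = spark.range(1_000_000)                            # no job runs yet
doubled = df.withColumn("double", F.col("id") * 2)     # transformation: still lazy
filtered = doubled.filter(F.col("double") % 4 == 0)    # transformation: still lazy

# Only an action (count, show, write, collect, ...) triggers execution,
# at which point Spark optimizes and runs the whole chain at once.
print(filtered.count())
```

For the Delta Lake and schema-evolution questions, a common answer involves Delta's ACID table format together with the mergeSchema option on write. The sketch below assumes a Databricks cluster (or a delta-spark-enabled session) where the Delta format is available; the path and column names are illustrative:

```python
from pyspark.sql import SparkSession

# On Databricks, Delta support is preconfigured and `spark` already exists;
# outside Databricks this requires the delta-spark package and extra configs.
spark = SparkSession.builder.appName("delta-schema-evolution-demo").getOrCreate()

# Write an initial Delta table, then append data that carries an extra column.
base = spark.createDataFrame([(1, "alice")], ["id", "name"])
base.write.format("delta").mode("overwrite").save("/tmp/users_delta")

extended = spark.createDataFrame([(2, "bob", "IN")], ["id", "name", "country"])

# mergeSchema lets Delta evolve the table schema to include the new column
# instead of failing the append with a schema-mismatch error.
(extended.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .save("/tmp/users_delta"))

spark.read.format("delta").load("/tmp/users_delta").show()
```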

Closing Remark

As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies