
1519 Databricks Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Data Engineer at our Pune location, you will play a critical role in designing, developing, and maintaining scalable data pipelines and architectures using Databricks on Azure/AWS cloud platforms. With 6 to 9 years of experience in the field, you will collaborate with stakeholders to integrate large datasets, optimize performance, implement ETL/ELT processes, ensure data governance, and work closely with cross-functional teams to deliver accurate solutions.

Your responsibilities will include building, maintaining, and optimizing data workflows; integrating datasets from various sources; tuning pipelines for performance and scalability; implementing ETL/ELT processes using Spark and Databricks (a minimal sketch follows this listing); ensuring data governance; collaborating with different teams; documenting data pipelines; and developing automated processes for continuous integration and deployment of data solutions.

To excel in this role, you should have 6 to 9 years of hands-on experience as a Data Engineer; expertise in Apache Spark, Delta Lake, and Azure/AWS Databricks; proficiency in Python, Scala, or Java; advanced SQL skills; and experience with cloud data platforms, data warehousing solutions, data modeling, ETL tools, version control systems, and automation tools. Soft skills such as problem-solving, attention to detail, and the ability to work in a fast-paced environment are essential. Nice-to-have skills include experience with Databricks SQL and Databricks Delta, knowledge of machine learning concepts, and experience with CI/CD pipelines for data engineering solutions.

Joining our team offers challenging work with international clients, growth opportunities, a collaborative culture, and global project involvement. We provide competitive salaries, flexible work schedules, health insurance, performance-based bonuses, and other standard benefits. If you are passionate about data engineering, possess the required skills and qualifications, and thrive in a dynamic and innovative environment, we welcome you to apply for this exciting opportunity.
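
For illustration, a minimal sketch of the kind of ETL/ELT step this role describes, reading raw files and writing a cleaned Delta table with PySpark; the paths, table, and column names are assumptions:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: read raw CSV landed by an upstream ingestion job (path is illustrative)
    raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

    # Transform: type the columns, drop malformed rows, derive a load date
    clean = (raw
        .withColumn("amount", F.col("amount").cast("double"))
        .dropna(subset=["order_id", "amount"])
        .withColumn("load_date", F.current_date()))

    # Load: write as a Delta table for downstream consumers
    clean.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")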

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Pricing Revenue Growth Consultant, your primary role will be to advise on building a pricing and promotion tool for a Consumer Packaged Goods (CPG) client. This tool will encompass pricing strategies, trade promotions, and revenue growth initiatives. You will be responsible for developing analytics and machine learning models to analyze price elasticity, promotion effectiveness, and trade promotion optimization (a minimal elasticity sketch appears at the end of this listing). Collaboration with the CPG business, marketing, data scientists, and other teams will be essential for the successful delivery of the project and tool.

Your business domain skills will be crucial in this role, including expertise in Trade Promotion Management (TPM), Trade Promotion Optimization (TPO), promotion depth and frequency forecasting, price pack architecture, competitive price tracking, revenue growth management, and financial modeling. Additionally, you will need proficiency in AI and machine learning for pricing, and in dynamic pricing implementation.

Key Responsibilities:
- Utilize consulting skills for hypothesis-driven problem solving, go-to-market pricing, and revenue growth execution.
- Conduct advisory presentations and data storytelling.
- Provide project leadership and execution.

In terms of technical requirements, you should possess:
- Proficiency in programming languages such as Python and R for data manipulation and analysis.
- Expertise in machine learning algorithms and statistical modeling techniques.
- Familiarity with data warehousing, data pipelines, and data visualization tools like Tableau or Power BI.
- Experience with cloud platforms like ADF, Databricks, Azure, and their AI services.

Your additional responsibilities will include:
- Working collaboratively with cross-functional teams across sales, marketing, and product development.
- Managing stakeholders and leading teams.
- Thriving in a fast-paced environment focused on delivering timely insights to support business decisions.
- Demonstrating excellent problem-solving skills and the ability to address complex technical challenges.
- Communicating effectively with cross-functional teams and stakeholders.
- Managing multiple projects simultaneously and prioritizing tasks based on business impact.

Qualifications:
- A degree in Data Science or Computer Science with a specialization in data science.
- A Master's in Business Administration and Analytics is preferred.

Preferred Skills:
- Experience in Technology, Big Data, and Text Analytics.
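
As a rough illustration of the price-elasticity modeling mentioned above, one common approach is a log-log regression, where the coefficient on log(price) estimates elasticity. A minimal sketch, assuming hypothetical weekly sales data:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Hypothetical weekly sales observations: units sold at each price point
    df = pd.DataFrame({
        "price": [2.49, 2.99, 3.49, 3.99, 4.49],
        "units": [1200, 980, 760, 610, 480],
    })

    # Log-log model: log(units) = a + b * log(price); b is the price elasticity
    X = np.log(df[["price"]])
    y = np.log(df["units"])
    model = LinearRegression().fit(X, y)

    elasticity = model.coef_[0]
    print(f"Estimated price elasticity: {elasticity:.2f}")  # below -1 means demand is elastic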

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Infrastructure Architect at our organization, you will play a crucial role in our digital transformation journey. You will have the opportunity to be involved in cybersecurity, architecture, and data protection for our global organization. Collaborating with our team, you will provide expertise in designing and validating systems, infrastructure, technologies, and data protection. Your responsibilities will include participating in technical and business discussions to shape future architecture direction, analyzing data to develop architectural requirements, and contributing to infrastructure architecture governance. Additionally, you will be involved in designing and deploying infrastructure solutions that meet standardization, security, compliance, and quality requirements for various businesses. Your role will also entail researching emerging technologies and trends to support project development and operational activities, as well as coaching and mentoring team members.

To excel in this role, you should hold a Bachelor's Degree with a minimum of 8 years of professional experience. You should possess experience in Azure infrastructure services, automating deployments, working in DevOps, and utilizing Databricks. Proficiency in database technologies, ETL tools, SQL query optimization, and computing/network/storage design is essential. Furthermore, you should demonstrate an understanding of technical and business discussions, architecture standards, and requirements gathering.

At our organization, we value diversity and recognize that individuals have unique working preferences. Therefore, we offer flexible working patterns, including remote work options and adaptable schedules to accommodate personal commitments. We believe in investing in our employees' well-being, fostering development, and cultivating leadership at all levels to create a supportive and inclusive work environment.

Join our team at Baker Hughes, an energy technology company dedicated to innovating solutions for energy and industrial clients worldwide. With a legacy of over a century and a presence in more than 120 countries, we are committed to advancing energy technologies for a safer, cleaner, and more efficient future. If you are passionate about driving innovation and progress, we invite you to be part of our team and contribute to shaping the future of energy.

Posted 1 day ago

Apply

13.0 - 16.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Evernorth
Evernorth exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable, and simple health care, we solve the problems others don't, won't, or can't. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there.

Software Engineering Senior Manager

Position Summary
In this role, the Software Engineering Senior Manager will be responsible for building and leading a highly talented team that is focused on using technology, advanced analytics, embedded insights, and product design principles to innovate and deliver modern solutions aligned to strategic initiatives. You will collaborate with business leadership and technology partners to define and execute on a shared vision. You authentically engage with your team and matrix partners to ensure ongoing alignment and delivery success. You are dedicated to technical excellence for yourself, your team, and the software you deliver. The ideal candidate should have 13-16 years of experience in building and leading high-performing teams in software engineering, preferably in health care or a related industry.

Job Description & Responsibilities
- Grow our engineering team: invest time in developing and mentoring team members; attract, hire, and retain top talent.
- Provide leadership and management of teams responsible for software development and the introduction of new technologies offshore.
- Partner with the business units, customers, and stakeholders.
- Lead development using modern software engineering and product development tools, including Agile/SAFe, Continuous Integration, and Continuous Delivery.
- Demonstrate leadership in the context of software engineering and be an evangelist for engineering best practices.
- Stay abreast of leading-edge technologies in the industry and evaluate emerging software technologies.
- Work collaboratively with all business areas to assess unmet/new business needs and solutions.
- Encourage the growth of direct and indirect reports through skills development, objectives, and goal setting.
- Hold direct reports accountable for meeting performance standards and departmental goals.
- Mentor staff, measure staff performance, and complete regular performance reviews and rankings.

Experience Desired
- 13-16 years of technology experience, with direct experience designing and implementing high-volume, multi-tier transactional systems, including web, large-scale database, workflow, enterprise-scale software, and service-oriented and cloud-based architectures.
- Proven experience leading/managing technical teams with a passion for developing the talent within the team.
- Experience with vendor management in an onshore/offshore model.
- Demonstrated success delivering software using modern, cloud-based technologies, preferably on AWS.
- Strong experience with most of the following technologies: Qlik/Kafka, AWS (serverless-related services), Python, SQL stored procedures, Databricks, Java, Debezium.
- Expertise across relational and NoSQL database platforms, including MS SQL, Postgres, DynamoDB, and Redshift.
- Strong grasp of cloud-native architectures, APIs, microservices, and modern DevOps practices (CI/CD, IaC, monitoring).
- Experience with agile methodology, including Scrum team leadership.
- Experience with modern delivery practices such as continuous integration, behavior/test-driven development, and specification by example.
- Proven experience with architecture, design, and development of large-scale enterprise application solutions.
- Strong written and verbal communication skills with the ability to interact with all levels of the organization.
- Strong influencing/negotiation and interpersonal/relationship management skills.
- Strong time and project management skills.
- Proven ability to resolve difficult and politically spirited issues and mitigate risks that have the potential of undermining the delivery of critical initiatives.
- Demonstrated leadership in building high-performing teams, including hiring and developing great people.
- Proven and accomplished individual with excellent leadership and strategic management skills.

Primary Skills / Education
- Degree in Computer Science, Information Technology, Artificial Intelligence, or a related field.
- Strong analytical skills to challenge design recommendations.
- Strong understanding of best practices related to interfaces and interoperability.
- Strong understanding of the SDLC in an Agile environment.

Additional Skills
- Proven track record of delivering software engineering initiatives, IT-application initiatives, and cross IT/business initiatives.
- Demonstrated success delivering products using modern software techniques.
- Applied expertise in software engineering methodologies, including automation and CI/CD.

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care, and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.

Posted 1 day ago

Apply

8.0 - 13.0 years

13 - 17 Lacs

Noida, Pune, Bengaluru

Work from Office

Position Summary
We are looking for a highly skilled and experienced Data Engineering Manager to lead our data engineering team. The ideal candidate will possess a strong technical background, strong project management abilities, and excellent client handling/stakeholder management skills. This role requires a strategic thinker who can drive the design, development, and implementation of data solutions that meet our clients' needs while ensuring the highest standards of quality and efficiency.

Job Responsibilities
- Technology Leadership: Lead and guide the team, independently or with little support, to design, implement, and deliver complex cloud-based data engineering / data warehousing project assignments.
- Solution Architecture & Review: Expertise in conceptualizing solution architecture and low-level design in a range of data engineering (Matillion, Informatica, Talend, Python, dbt, Airflow, Apache Spark, Databricks, Redshift) and cloud hosting (AWS, Azure) technologies.
- Manage projects in a fast-paced agile ecosystem and ensure quality deliverables within stringent timelines.
- Responsible for risk management, maintaining the risk documentation and mitigation plans.
- Drive continuous improvement in a Lean/Agile environment, implementing DevOps delivery approaches encompassing CI/CD, build automation, and deployments.
- Communication & Logical Thinking: Demonstrate strong analytical skills, employing a systematic and logical approach to data analysis, problem-solving, and situational assessment. Be capable of effectively presenting and defending team viewpoints, while securing buy-in from both technical and client stakeholders.
- Handle Client Relationships: Manage client relationships and client expectations independently, and deliver results back to the client independently. Should have excellent communication skills.

Education
BE/B.Tech, Master of Computer Applications

Work Experience
- Should have expertise and 8+ years of working experience in at least two ETL tools among Matillion, dbt, PySpark, Informatica, and Talend.
- Should have expertise and working experience in at least two databases among Databricks, Redshift, Snowflake, SQL Server, and Oracle.
- Should have strong data warehousing, data integration, and data modeling fundamentals such as star schema, snowflake schema, dimension tables, and fact tables (a minimal star-schema query is sketched below).
- Strong experience with SQL building blocks, creating complex SQL queries and procedures.
- Experience in AWS or Azure cloud and their service offerings.
- Aware of techniques such as data modelling, performance tuning, and regression testing.
- Willingness to learn and take ownership of tasks.
- Excellent written/verbal communication and problem-solving skills. Understanding of, and working experience with, pharma commercial data sets like IQVIA, Veeva, Symphony, Liquid Hub, Cegedim, etc. would be an advantage.
- Hands-on in Scrum methodology (sprint planning, execution, and retrospection).

Behavioural Competencies: Teamwork & Leadership, Motivation to Learn and Grow, Ownership, Cultural Fit, Talent Management
Technical Competencies: Problem Solving, Life Science Knowledge, Communication, Designing Technical Architecture, Agile, PySpark, AWS Data Pipeline, Data Modelling, Matillion, Databricks
Location: Noida, Bengaluru, Pune, Hyderabad, India
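
For reference, the star-schema fundamentals listed above boil down to joining a fact table to its dimensions and aggregating; a minimal PySpark sketch, where the table and column names are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

    # A typical star-schema query: aggregate a fact table by attributes
    # pulled from its dimension tables (names are illustrative).
    monthly_sales = spark.sql("""
        SELECT d.year_month,
               p.product_category,
               SUM(f.sales_amount) AS total_sales
        FROM   fact_sales f
        JOIN   dim_date    d ON f.date_key    = d.date_key
        JOIN   dim_product p ON f.product_key = p.product_key
        GROUP BY d.year_month, p.product_category
    """)
    monthly_sales.show()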

Posted 2 days ago

Apply

1.0 - 4.0 years

5 - 8 Lacs

Mumbai

Work from Office

We are hiring a Data Engineer to design and manage data pipelines from factory floors to the Azure cloud, supporting our central data lakehouse architecture. You'll work closely with OT engineers, architects, and AI teams to move data from edge devices into curated layers (Bronze → Silver → Gold), ensuring high data quality, security, and performance. Your work will directly enable advanced analytics and AI in production and operations.

Key Job Functions
1) Build data ingestion and transformation pipelines using Azure Data Factory, IoT Hub, and Databricks
2) Integrate OT sensor data using protocols like OPC-UA and MQTT
3) Design medallion architecture flows with Delta Lake and Synapse (a minimal sketch follows below)
4) Monitor and optimize data performance and reliability
5) Implement data quality, observability, and lineage practices (e.g., with Purview or Unity Catalog)
6) Collaborate with OT and IT teams to ensure contextualized, usable data
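
To illustrate the medallion flow referenced above, a minimal PySpark sketch promoting raw sensor readings from a Bronze Delta table to a cleaned Silver table; the paths and columns are assumptions:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

    # Bronze: raw sensor events as ingested from IoT Hub (schema is illustrative)
    bronze = spark.read.format("delta").load("/mnt/lake/bronze/sensor_events")

    # Silver: enforce types, drop obviously bad readings, deduplicate
    silver = (bronze
        .withColumn("reading", F.col("reading").cast("double"))
        .filter(F.col("reading").isNotNull() & (F.col("reading") >= 0))
        .dropDuplicates(["device_id", "event_time"]))

    silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/sensor_events")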

Posted 2 days ago

Apply

1.0 - 3.0 years

9 - 13 Lacs

Pune

Work from Office

Your Team Responsibilities
We are hiring an Associate Data Engineer to support our core data pipeline development efforts and gain hands-on experience with industry-grade tools like PySpark, Databricks, and cloud-based data warehouses. The ideal candidate is curious, detail-oriented, and eager to learn from senior engineers while contributing to the development and operationalization of critical data workflows.

Your Key Responsibilities
- Assist in the development and maintenance of ETL/ELT pipelines using PySpark and Databricks under senior guidance.
- Support data ingestion, validation, and transformation tasks across Rating Modernization and Regulatory programs.
- Collaborate with team members to gather requirements and document technical solutions.
- Perform unit testing, data quality checks, and process monitoring activities (a minimal data-quality check is sketched at the end of this listing).
- Contribute to the creation of stored procedures, functions, and views.
- Support troubleshooting of pipeline errors and validation issues.

Your Skills and Experience That Will Help You Excel
- Bachelor's degree in Computer Science, Engineering, or a related discipline.
- 3+ years of experience in data engineering or internships in data/analytics teams.
- Working knowledge of Python, SQL, and ideally PySpark.
- Understanding of cloud data platforms (Databricks, BigQuery, Azure/GCP).
- Strong problem-solving skills and eagerness to learn distributed data processing.
- Good verbal and written communication skills.

About MSCI: What We Offer You
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/resumes. Please do not forward CVs/resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
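
As referenced in the responsibilities above, a minimal sketch of a rule-based data quality check in PySpark; the source table and column names are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq-checks").getOrCreate()
    df = spark.table("ratings.instrument_scores")  # hypothetical source table

    # Simple rule-based checks: nulls in key columns and out-of-range values
    total = df.count()
    null_ids = df.filter(F.col("instrument_id").isNull()).count()
    bad_scores = df.filter((F.col("score") < 0) | (F.col("score") > 100)).count()

    # Fail fast so the surrounding pipeline can alert and stop the load
    assert null_ids == 0, f"{null_ids}/{total} rows missing instrument_id"
    assert bad_scores == 0, f"{bad_scores}/{total} rows with score out of [0, 100]"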

Posted 2 days ago

Apply

6.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Remote

Microsoft Dynamics CRM Developer, 6 to 10 years of relevant experience. Remote, pan India, work from home. Mandatory skills: Microsoft Dynamics CRM, Azure Data Factory, Azure Databricks. Immediate to 30-day joiners.

Posted 2 days ago

Apply

7.0 - 9.0 years

5 - 5 Lacs

Thiruvananthapuram

Work from Office

1. Production monitoring and troubleshooting in on-prem ETL and AWS environments
2. Working experience using ETL DataStage along with DB2
3. Awareness of tools such as Dynatrace, AppDynamics, Postman, and AWS CI/CD
4. Software code development experience in ETL batch processing and the AWS cloud
5. Software code management, repository updates, and reuse
6. Implementation and/or configuration, management, and maintenance of software
7. Implementation and configuration of SaaS and public, private, and hybrid cloud-based PaaS solutions
8. Integration of SaaS and PaaS solutions with Data Warehouse Application Systems, including SaaS and PaaS upgrade management
9. Configuration, maintenance, and support for the entire DWA Application Systems landscape, including but not limited to supporting DWA Application Systems components and tasks required to deliver business processes and functionality (e.g., logical layers of databases, data marts, logical and physical data warehouses, middleware, interfaces, shell scripts, massive data transfers and uploads, web development, mobile app development, web services, and APIs)
10. DWA Application Systems support for day-to-day changes and business continuity and for addressing key business, regulatory, legal, or fiscal requirements
11. Support for all third-party specialized DWA Application Systems
12. DWA Application Systems configuration and collaboration with the infrastructure service supplier required to provide application access to external/third parties
13. Integration with internal and external systems (e.g., direct application interfaces, logical middleware configuration, and application program interface (API) use and development)
14. Collaboration with third-party suppliers such as the infrastructure service supplier and enterprise public cloud providers
15. Documentation and end-user training for new functionality
16. All activities required to support business process application functionality and to deliver the required application and business functions to end users in an integrated service delivery model across the DWA Application Development lifecycle (e.g., plan, deliver, run); maintain data quality and run batch schedules; operations and maintenance
17. Deploy code to all environments (Prod, UAT, Performance, SIT, etc.)
18. Address all open tickets within the SLA

CDK (TypeScript), CFT (YAML)
Nice to have: GitHub, scripting (Bash/sh), security-minded with knowledge of best practices, Python, Databricks & Snowflake
Required Skills: Databricks, DataStage, CloudOps, production support

Posted 2 days ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Datawarehouse Database Architect - Immediate hiring. We are currently looking for a Datawarehouse Database Architect for our client, a Fintech solutions company. Please let us know your interest and availability.

Experience: 10+ years
Location: Hybrid; any Accion office in India preferred (Bangalore/Pune/Mumbai)
Notice Period: Immediate; 0-15 days joiners are preferred

Required skills - Tools & Technologies:
Cloud Platform: Azure (Databricks, DevOps, Data Factory, Azure Synapse Analytics, Azure SQL, Blob Storage, Databricks Delta Lake)
Languages: Python, PL/SQL, SQL, C, C++, Java
Databases: Snowflake, MS SQL Server, Oracle
Design Tools: Erwin & MS Visio
Data warehouse tools: SSIS, SSRS, SSAS, Power BI, DBT, Talend Stitch, PowerApps, Informatica 9, Cognos 8, OBIEE
Any cloud experience is good to have.

Let's connect for more details. Please write to me at mary.priscilina@accionlabs.com along with your CV and the best contact details to get connected for a quick discussion.

Regards,
Mary Priscilina

Posted 2 days ago

Apply

3.0 - 8.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Primary Responsibilities:
- Be a team player in an agile team within a release team / value stream
- Develop and automate business solutions by creating new and modifying existing software applications
- Be technically hands-on and excellent in design, coding, and end-to-end testing, owning product quality
- Participate in and contribute to sprint ceremonies
- Promote and develop a culture of collaboration, accountability, and quality
- Provide technical support to the team and help the team resolve technical issues
- Work closely with the Tech Lead, onshore partners, and deployment and infrastructure teams
- Apply a basic, structured, standard approach to work
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Graduate degree or equivalent experience
- 3+ years of experience working in data warehousing and data mart platforms
- 3+ years of working experience in the warehousing ecosystem: design and development, scheduling jobs using Airflow, and running and monitoring refreshes (a minimal Airflow sketch follows below)
- 3+ years of working experience in big data technologies around Spark or PySpark and Databricks
- 3+ years of working experience in an Agile team
- 2+ years of working experience in cloud and DevOps technologies, preferably on Azure: Docker/Kubernetes/Terraform/Chef
- Working experience with CI/CD pipelines (test, build, deployment, and monitoring automation)
- Knowledge of software configuration management and packaging
- Excellent problem-solving skills

Preferred Qualification:
- 3+ years of working experience in ELT/ETL design and development, and solid experience in SQL on Teradata and Snowflake

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
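
To illustrate the Airflow-based scheduling mentioned above, a minimal sketch of a daily refresh DAG; the DAG id, schedule, and refresh function are hypothetical:

    from datetime import datetime, timedelta
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def refresh_mart():
        # Placeholder for the actual refresh logic (e.g., triggering a Spark job)
        print("Refreshing data mart...")

    with DAG(
        dag_id="daily_mart_refresh",
        start_date=datetime(2024, 1, 1),
        schedule_interval="0 2 * * *",  # run every day at 02:00
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
    ) as dag:
        refresh = PythonOperator(task_id="refresh_mart", python_callable=refresh_mart)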

Posted 2 days ago

Apply

3.0 - 7.0 years

7 - 11 Lacs

Bengaluru

Work from Office

As a key member of our Data Science team, you will be responsible for developing innovative AI/ML solutions across diverse business domains. This includes designing, implementing, and optimizing advanced analytics models to address complex business challenges and drive data-driven decision making. Your core responsibility will be to extract actionable insights from large datasets, develop predictive algorithms, and create robust machine learning pipelines. You will collaborate closely with cross-functional teams including business analysts, software engineers, and product managers to understand business requirements, define problem statements, and deliver scalable solutions. Additionally, you'll be expected to stay current with emerging technologies and methodologies in the AI/ML landscape to ensure our technical approaches remain cutting-edge and effective.

Desired Skills and Experience
- Demonstrated expertise in applying advanced statistical modeling, machine learning algorithms, and deep learning techniques.
- Proficiency in programming languages such as Python for data analysis and model development.
- Proficiency in cloud platforms such as Azure, Azure Data Factory, Snowflake, and Databricks.
- Experience with data manipulation, cleaning, and preprocessing using pandas, NumPy, or equivalent libraries.
- Strong knowledge of SQL and experience working with various database systems and big data technologies.
- Proven track record of developing and deploying machine learning models in production environments.
- Experience with version control systems (e.g., Git) and collaborative development practices.
- Proficiency with visualization tools and libraries such as Matplotlib, Seaborn, Tableau, or Power BI.
- Strong mathematics background, including statistics, probability, linear algebra, and calculus.
- Excellent communication skills with the ability to translate technical concepts to non-technical stakeholders.
- Experience working in cross-functional teams and managing projects through the full data science lifecycle.
- Knowledge of ethical considerations in AI development, including bias detection and mitigation techniques.

Key Responsibilities
- Analyze complex datasets to extract meaningful insights and patterns using statistical methods and machine learning techniques.
- Design, develop, and implement advanced machine learning models and algorithms to solve business problems and drive data-driven decision making.
- Perform feature engineering, model selection, and hyperparameter tuning to optimize model performance and accuracy (a minimal tuning sketch follows below).
- Create and maintain data processing pipelines for efficient data collection, cleaning, transformation, and integration.
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Evaluate model performance using appropriate metrics and validation techniques to ensure reliability and robustness.
- Present findings, visualizations, and recommendations to stakeholders in clear, accessible formats tailored to technical and non-technical audiences.
- Stay current with the latest advancements in machine learning, deep learning, and statistical methods through continuous learning and research.
- Develop proof-of-concept applications to demonstrate the value and feasibility of data science solutions.
- Implement A/B testing and experimental design methodologies to validate hypotheses and measure the impact of implemented solutions.
- Document methodologies, procedures, and results thoroughly to ensure reproducibility and knowledge transfer within the organization.
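
As a rough illustration of the model selection and hyperparameter tuning mentioned above, a minimal scikit-learn sketch using cross-validated grid search; the dataset and parameter grid are assumptions:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    # Synthetic data standing in for a real business dataset
    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Cross-validated search over a small hyperparameter grid
    grid = GridSearchCV(
        RandomForestClassifier(random_state=42),
        param_grid={"n_estimators": [100, 300], "max_depth": [5, 10, None]},
        cv=5,
        scoring="roc_auc",
    )
    grid.fit(X_train, y_train)

    print("Best params:", grid.best_params_)
    print("Held-out score:", grid.score(X_test, y_test))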

Posted 2 days ago

Apply

5.0 - 7.0 years

5 - 9 Lacs

Gurugram

Work from Office

Assist in building out the backlog of Power BI dashboards, ensuring they meet business requirements and provide actionable insights. Collect and maintain a firmwide inventory of existing reports, identifying those that need to be converted to Power BI. Collaborate with the team to contract and integrate Snowflake, ensuring seamless data flow and accessibility for reporting and analytics.

Desired Skills and Experience
- Candidates should have a B.E./B.Tech/MCA/MBA in Information Systems, Computer Science, or a related field.
- 3+ years of strong experience in developing and managing Power BI dashboards and reports, preferably within the financial services industry.
- Experience required in data warehousing, SQL, and hands-on expertise in ETL/ELT processes.
- Familiarity with Snowflake data warehousing solutions and integration.
- Proficiency in data integration from various sources, including APIs and databases.
- Proficient in SQL for querying and manipulating data.
- Strong understanding of data warehousing concepts and practices.
- Experience with deploying and managing dashboards on a Power BI server to service a large number of users.
- Familiarity with other BI tools and platforms.
- Experience with financial datasets and an understanding of private equity metrics.
- Knowledge of cloud platforms, particularly Azure, Snowflake, and Databricks.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills, both written and oral, with business and technical aptitude.
- Must possess good verbal and written communication and interpersonal skills.

Key Responsibilities
- Create and maintain interactive and visually appealing Power BI dashboards to visualize data insights.
- Assist in building out the backlog of Power BI dashboards, ensuring they meet business requirements and provide actionable insights.
- Integrate data from various sources, including APIs, databases, and cloud storage solutions such as Azure, Snowflake, and Databricks.
- Collect and maintain a firmwide inventory of existing reports, identifying those that need to be converted to Power BI.
- Collaborate with the team to contract and integrate Snowflake, ensuring seamless data flow and accessibility for reporting and analytics.
- Continuously refine and improve the user interface of dashboards based on ongoing input and feedback.
- Monitor and optimize the performance of dashboards to handle large volumes of data efficiently.
- Work closely with stakeholders to understand their reporting needs and translate them into effective Power BI solutions.
- Ensure the accuracy and reliability of data within Power BI dashboards and reports.
- Deploy dashboards onto a Power BI server to be serviced to a large number of users, ensuring high availability and performance.
- Ensure that dashboards provide self-service capabilities and are interactive for end users.
- Create detailed documentation of BI processes and provide training to internal teams and clients on Power BI usage.
- Stay updated with the latest Power BI and Snowflake features and best practices to continuously improve reporting capabilities.

Behavioral Competencies
- Effectively communicate with business and technology partners, peers, and stakeholders.
- Deliver results under demanding timelines on real-world business problems.
- Work independently and multi-task effectively.
- Identify and communicate areas for improvement.
- Demonstrate high attention to detail, work in a dynamic environment while maintaining high quality standards, show a natural aptitude for developing good internal working relationships, and bring a flexible work ethic.
- Be responsible for quality checks and adhere to the agreed Service Level Agreement (SLA) / Turnaround Time (TAT).

Posted 2 days ago

Apply

7.0 - 12.0 years

27 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

We’re hiring Databricks Developers skilled in PySpark & SQL for cloud-based projects. Multiple positions are open based on experience level. Email: Anita.s@liveconnections.in

Jobs at Hyderabad, Mumbai, and Pune.

Required candidate profile: Exciting walk-in drive on Aug 2 across Mumbai, Pune & Hyderabad. Shape the future with data. 7-12 years of total experience with 3-5 years in Databricks (Azure/AWS). Must know PySpark & SQL.

Posted 2 days ago

Apply

5.0 - 8.0 years

10 - 15 Lacs

Noida, Pune, Bengaluru

Hybrid

AWS Databricks Developer

Mandatory skills: AWS, Databricks, Python, SQL

Responsibilities:
- Designing and implementing scalable data pipelines using Databricks and Apache Spark (a minimal sketch follows below).
- Proficiency in programming languages such as Python and SQL.
- Analysing and processing large datasets to uncover actionable insights.
- Integrating data flows across AWS services to ensure seamless connectivity.
- Collaborating with cross-functional teams to streamline data operations and workflows.

Experience: 5-8 years
Location: Mumbai/Bangalore/Pune/Chennai/Hyderabad/Indore/Kolkata/Noida/Coimbatore/Bhubaneswar
Notice period: 0-30 days

Interested candidates, share your CV at Muktai.S@alphacom.in
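
A minimal sketch of the kind of pipeline this role describes: reading raw data from S3 with PySpark on Databricks, transforming it, and writing a partitioned Delta table. The bucket paths and column names are assumptions:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("s3-to-delta").getOrCreate()

    # Read raw JSON events landed in S3 (bucket/path are illustrative)
    events = spark.read.json("s3://example-raw-bucket/events/")

    # Aggregate daily event counts per user
    daily = (events
        .withColumn("event_date", F.to_date("event_time"))
        .groupBy("event_date", "user_id")
        .agg(F.count("*").alias("event_count")))

    # Write to a partitioned Delta table for downstream analytics
    (daily.write.format("delta")
        .mode("overwrite")
        .partitionBy("event_date")
        .save("s3://example-curated-bucket/daily_user_events/"))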

Posted 2 days ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Senior Infrastructure Architect

Would being part of a digital transformation excite you? Are you passionate about infrastructure security? Join our digital transformation team!

We operate at the heart of the digital transformation of our business. Our team is responsible for the cybersecurity, architecture, and data protection of our global organization. We advise on the design and validation of all systems, infrastructure, technologies, and data protection.

Partner with the best. As a Senior Infrastructure Architect, you will be responsible for:
- Participating in the domain technical and business discussions relative to future architecture direction.
- Assisting in the analysis, design, and development of a roadmap and implementation based upon a current vs. future state, in a cohesive architecture viewpoint.
- Gathering and analyzing data and developing architectural requirements at the project level.
- Participating in the infrastructure architecture governance model.
- Supporting the design and deployment of infrastructure solutions meeting standardization, consolidation, TCO, security, regulatory compliance, and application system quality requirements for different businesses.
- Researching and evaluating emerging technology and industry and market trends to assist in project development and/or operational support activities.
- Coaching and mentoring team members.

Fuel your passion. To be successful in this role you will:
- Have a Bachelor's Degree and a minimum of 8 years of professional experience.
- Have experience in Azure infrastructure services and automating deployments.
- Have experience working in DevOps and Databricks.
- Have hands-on experience working with database technologies, including ETL tools and Databricks Workflows using PySpark/Python, and an ability to learn new technologies.
- Have strong proficiency in writing and optimizing SQL queries and working with databases.
- Have skilled-level expertise in the design of computing, network, or storage to meet business application system qualities.
- Understand technical and business discussions relative to future architecture direction aligning with business goals.
- Understand concepts of setting and driving architecture direction.
- Be familiar with elements of gathering architecture requirements.
- Understand architecture standards concepts to apply to project work.

Work in a way that works for you. We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working patterns:
- Working remotely from home or any other work location
- Flexibility in your work schedule to help fit in around life!

Talk to us about your desired flexible working options when you apply.

Working with us: Our people are at the heart of what we do at Baker Hughes. We know we are better when all of our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other.

About Us: We are an energy technology company that provides solutions to energy and industrial customers worldwide. Built on a century of experience and conducting business in over 120 countries, our innovative technologies and services are taking energy forward, making it safer, cleaner, and more efficient for people and the planet.

Join Us: Are you seeking an opportunity to make a real difference in a company that values innovation and progress? Join us and become part of a team of people who will challenge and inspire you! Let's come together and take energy forward.

Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status, or other characteristics protected by law. R139742

Posted 2 days ago

Apply

1.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

At Warner Music Group, we're a global collective of music makers and music lovers, tech innovators and inspired entrepreneurs, game-changing creatives and passionate team members. Here, we turn dreams into stardom and audiences into fans. We are guided by three core values that underpin everything we do across all our diverse businesses:

Curiosity: We do our best work when we're immersing ourselves in culture and breaking through barriers. Curiosity is the driving force behind creativity and ingenuity. It fuels innovation, and innovation is the key to our future.

Collaboration: Making music and bringing it to the world is all about the power of originality amplified by teamwork. A great idea, like a great song, travels globally. We ignite passions and build connections across our diverse community of artists, songwriters, partners, and fans.

Commitment: We pursue excellence for our team and our talent. Everything in music starts with a leap into the unknown, and we're committed to keeping the faith, acting with integrity, and delivering on our promises.

Technology is one of the most important parts of our business. Whether it's signing up new artists; ensuring we provide the right data to Spotify, YouTube, and other digital service providers; or helping artists use the latest AI tools and make thoughtful decisions with data-driven insights, technology plays an invaluable role in our success. The engineering team at Warner Music Group makes all of it a reality.

WMG is home to a wide range of artists, musicians, and songwriters that fuel our success. That is why we are committed to creating a work environment that actively values, appreciates, and respects everyone. We encourage applications from people with a wide variety of backgrounds and experiences. Consider a career at WMG and get the best of both worlds: an innovative global music company that retains the creative spirit of a nimble independent.

Your Role
We are building the next-generation Data Platform that sets the standard for freshness, accuracy, comprehensiveness, and ease of use to power Warner Music Group's business. This is a unique opportunity to be a part of a brand new, high-performing engineering center of excellence and drive significant impact for WMG's technology initiatives. This role will collaborate closely with our North American engineering teams to synchronize strategies, processes, and objectives. This is a hybrid position that requires you to work onsite at our Bangalore office a few days per week.

We are reimagining this platform from the ground up, including:
- How we store, represent, ingest, and serve factual data about artists, songwriters, and their works
- How we ingest, process, ETL, and serve data about music consumption across all digital platforms we partner with
- How we represent the relationships between artists, songwriters, and their works to power next-generation search and discovery for our analysts and business partners

Our Data Platform is the foundation of all we do at Warner Music Group, feeding core business processes like marketing optimization, performance tracking, and rights acquisition and distribution; next-generation capabilities like artist-fan connection, trend analysis, and other machine-learning-powered applications; and even advanced capabilities like generative AI applications in music. We need strong engineers who love music, data, and building world-class systems that scale to solve data problems.

Responsibilities
- Reimagine and implement the future of tech for the music industry, building an all-new codebase
- Work as part of a dynamic and highly effective team
- Own the creation and delivery of highly innovative products
- Learn and grow as a professional through close collaboration with your team members and engineering leaders, and by being part of a culture of continuous improvement and learning

About You
- You have an undergraduate or graduate degree in Computer Science, Computer Engineering, or another related field
- You have at least 6 years of experience in backend development or data engineering
- You have built or developed large-scale data processing pipelines and/or large, high-availability dimensional datastores; experience with Snowflake or Databricks is a plus
- You are passionate about music and have a deep desire to provide the data that will help bring more great music into the world
- You have a high sense of ownership and a drive to deliver impact in a fast-paced, evolving, ambiguous environment
- You have a drive to grow, learn, and master the craft of software development

As the home to 10K Projects, Asylum, Atlantic Music Group, East West, FFRR, Fueled by Ramen, Nonesuch, Parlophone, Rhino, Roadrunner, Sire, Warner Records, Warner Classics, and several other of the world's premier recording labels, Warner Music Group champions emerging artists and global superstars alike. And our renowned publishing company, Warner Chappell Music, represents genre-spanning songwriters and producers through a catalog of more than one million copyrights worldwide. Warner Music Group is also home to ADA, which supports the independent community, as well as artist services division WMX. In addition, WMG counts film and television storytelling powerhouse Warner Music Entertainment among its many brands. Together, we are Warner Music Group: Independent Minds. Major Sound.

Love this job and want to apply? Click the "Apply" link at the top of the page, or apply directly with your LinkedIn. Applying with LinkedIn will import all of the information you put in your profile, but will still allow you to upload a resume and cover letter. Don't be discouraged if you don't hear from us right away. We're taking our time to review all resumes and to find the best people for WMG. Thanks for your interest in working for WMG. We love it here, and think you will, too.

WMG is committed to inclusion and diversity in all aspects of our business. We are proud to be an equal opportunity workplace and will evaluate qualified applicants without regard to race, religious creed, color, age, sex, sexual orientation, gender, gender identity, gender expression, national origin, ancestry, marital status, medical condition as defined by state law (genetic characteristics or cancer), physical or mental disability, military service or veteran status, pregnancy, childbirth and related medical conditions, genetic information, or any other characteristic protected by applicable federal, state, or local law.

Copyright 2025 Warner Music Inc.

Posted 2 days ago

Apply

8.0 - 12.0 years

14 - 20 Lacs

Bengaluru

Work from Office

Azure Data Engineer
- Experience in Azure Data Factory, Databricks, Azure Data Lake, and Azure SQL Server.
- Developed ETL/ELT processes using SSIS and/or Azure Data Factory.
- Build complex pipelines and dataflows using Azure Data Factory.
- Design and implement data pipelines using Azure Data Factory (ADF).
- Improve the functionality and performance of existing data pipelines.
- Performance tuning of processes dealing with very large data sets (a minimal tuning sketch follows below).
- Configuration and deployment of ADF packages.
- Proficient in the usage of ARM templates, Key Vault, and integration runtimes.
- Adaptable to working with ETL frameworks and standards.
- Strong analytical and troubleshooting skills to root-cause issues and find solutions.
- Propose innovative, feasible, and best solutions for business requirements.
- Knowledge of Azure technologies/services such as Blob Storage, ADLS, Logic Apps, Azure SQL, and WebJobs.
- Expert in ServiceNow, incident management, and JIRA.
- Should have exposure to agile methodology.
- Expert in understanding and building Power BI reports using the latest methodologies.
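
As a rough illustration of the performance tuning mentioned above, one common Spark technique for very large datasets is broadcasting a small dimension table to avoid a shuffle join; a minimal sketch where the table names and paths are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

    fact = spark.table("sales.transactions")   # very large table
    dim = spark.table("sales.stores")          # small lookup table

    # Broadcasting the small table avoids shuffling the large one across the cluster
    joined = fact.join(broadcast(dim), on="store_id")

    # Repartitioning by the write key reduces small-file problems on output
    (joined.repartition("txn_date")
        .write.format("delta")
        .mode("overwrite")
        .partitionBy("txn_date")
        .save("/mnt/lake/gold/transactions_enriched"))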

Posted 2 days ago

Apply

9.0 - 14.0 years

20 - 35 Lacs

Noida

Remote

Job Title: Databricks SME Engineer
Work Timing: US EST hours
Experience: 8+ years
Location: Remote

Job Responsibilities (Databricks hands-on SME Engineer):
- Architect, configure, and optimize Databricks pipelines for large-scale data processing within an Azure Data Lakehouse environment.
- Set up and manage Azure infrastructure components, including Databricks workspaces, Azure containers (AKS/ACI), storage accounts, and networking.
- Design and implement a monitoring and observability framework using tools like Azure Monitor, Log Analytics, and Prometheus/Grafana.
- Collaborate with platform and data engineering teams to enable a microservices-based architecture for scalable and modular data solutions.
- Drive automation and CI/CD practices using Terraform, ARM templates, and GitHub Actions/Azure DevOps (a minimal automation sketch follows below).

Required Skills & Experience:
- Strong hands-on experience with Azure Databricks, Delta Lake, and Apache Spark.
- Deep understanding of Azure services: Resource Manager, AKS, ACR, Key Vault, and networking.
- Proven experience in microservices architecture and container orchestration.
- Expertise in infrastructure-as-code, scripting (Python, Bash), and DevOps tooling.
- Familiarity with data governance, security, and cost optimization in cloud environments.

Bonus:
- Experience with event-driven architectures (Kafka/Event Grid).
- Knowledge of data mesh principles and distributed data ownership.

Interested candidates can apply: dsingh15@fcsltd.com
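
To illustrate the pipeline automation described above, a minimal Python sketch that creates a scheduled Databricks job through the Jobs REST API; the workspace URL, token handling, notebook path, and cluster settings are assumptions:

    import requests

    DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical workspace
    TOKEN = "..."  # in practice, pull this from Azure Key Vault, not source code

    job_spec = {
        "name": "nightly-lakehouse-refresh",
        "tasks": [{
            "task_key": "refresh",
            "notebook_task": {"notebook_path": "/Pipelines/nightly_refresh"},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 2,
            },
        }],
        "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC"},
    }

    # Create the job via the Databricks Jobs 2.1 API
    resp = requests.post(
        f"{DATABRICKS_HOST}/api/2.1/jobs/create",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=job_spec,
    )
    resp.raise_for_status()
    print("Created job:", resp.json()["job_id"])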

Posted 2 days ago

Apply

6.0 - 11.0 years

12 - 17 Lacs

Pune

Work from Office

Roles and Responsibility
The Senior Tech Lead - Databricks leads the design, development, and implementation of advanced data solutions. The role requires extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects.

Responsibilities:
- Lead the design and implementation of Databricks-based data solutions.
- Architect and optimize data pipelines for batch and streaming data.
- Provide technical leadership and mentorship to a team of data engineers.
- Collaborate with stakeholders to define project requirements and deliverables.
- Ensure best practices in data security, governance, and compliance.
- Troubleshoot and resolve complex technical issues in Databricks environments.
- Stay updated on the latest Databricks features and industry trends.

Key Technical Skills & Responsibilities
- Experience in data engineering using Databricks or Apache Spark-based platforms.
- Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion.
- Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse.
- Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation.
- Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing.
- Familiarity with Delta Lake, Delta Live Tables, and the medallion architecture for data lakehouse implementations (a minimal Delta Live Tables sketch follows below).
- Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation.
- Design and implement Azure Key Vault and scoped credentials.
- Knowledge of Git for source control and CI/CD integration for Databricks workflows, cost optimization, and performance tuning.
- Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups.
- Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus.
- Ability to define best practices, support multiple projects, and mentor junior engineers is a plus.
- Must have experience working with streaming data sources, with Kafka preferred.

Eligibility Criteria:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- Extensive experience with Databricks, Delta Lake, PySpark, and SQL
- Databricks certification (e.g., Certified Data Engineer Professional)
- Experience with machine learning and AI integration in Databricks
- Strong understanding of cloud platforms (AWS, Azure, or GCP)
- Proven leadership experience in managing technical teams
- Excellent problem-solving and communication skills

Our Offering
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment
- Wellbeing programs and work-life balance, with integration and passion-sharing events
- Attractive salary and company initiative benefits
- Courses and conferences
- Hybrid work culture
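
As a reference point for the Delta Live Tables item above, a minimal sketch of a DLT pipeline definition in Python, declaring a bronze-to-silver flow with a data-quality expectation; the source path and column names are assumptions:

    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw events ingested from cloud storage (bronze)")
    def events_bronze():
        # Auto Loader incrementally picks up new files from the landing path;
        # `spark` is provided by the Databricks runtime inside a DLT pipeline
        return (spark.readStream.format("cloudFiles")
                .option("cloudFiles.format", "json")
                .load("/mnt/landing/events/"))

    @dlt.table(comment="Cleaned events (silver)")
    @dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")
    def events_silver():
        return (dlt.read_stream("events_bronze")
                .withColumn("event_date", F.to_date("event_time")))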

Posted 3 days ago

Apply

4.0 - 9.0 years

9 - 13 Lacs

Pune

Work from Office

Your Position
You will work as a Data Engineer with Machine Learning expertise in the Predictive Maintenance team. This hybrid and multi-cultural team includes Data Scientists, Machine Learning Engineers, Data Engineers, a DevOps Engineer, a QA Engineer, an Architect, a UX Designer, a Scrum Master, and a Product Owner. The Digital Service Platform focuses on optimizing customer asset usage and maintenance, impacting performance, cost, and sustainability KPIs by extending component lifetimes.

In your role, you will:
- Participate in solution design discussions led by our Product Architect, where your input as a Data Engineer with ML expertise is highly valued.
- Collaborate with IT and business SMEs to ensure delivery of high-quality end-to-end data and machine learning pipelines.

Your Responsibilities

Data Engineering
- Develop, test, and document data collection and processing pipelines for Predictive Maintenance solutions, carrying data from (IoT) sensors and control components to our data platform.
- Build scalable pipelines to transform, aggregate, and make data available for machine learning models.
- Align implementation efforts with other back-end developers across multiple development teams.

Machine Learning Integration
- Collaborate with Data Scientists to integrate machine learning models into production pipelines, ensuring smooth deployment and scalability.
- Develop and optimize end-to-end machine learning pipelines (MLOps), from data preparation to model deployment and monitoring (a minimal MLflow sketch follows below).
- Work on model inference pipelines, ensuring efficient real-time predictions from deployed models.
- Implement automated retraining workflows and ensure version control for datasets and models.

Continuous Improvement
- Contribute to the design and build of a CI/CD pipeline, including integration test automation for data and ML pipelines.
- Continuously improve and standardize data and ML services for customer sites to reduce project delivery time.
- Actively monitor model performance and ensure timely updates or retraining as needed.

Your Profile
- Minimum 4 years' experience building complex data pipelines and integrating machine learning solutions.
- Bachelor's or Master's degree in Computer Science, IT, Data Science, or equivalent.
- Hands-on experience with data modeling and machine learning workflows.
- Strong programming skills in Java, Scala, and Python (preferred for ML tasks).
- Experience with stream processing frameworks (e.g., Spark) and streaming storage (e.g., Kafka).
- Proven experience with MLOps practices, including data preprocessing, model deployment, and monitoring.
- Familiarity with ML frameworks and tools (e.g., TensorFlow, PyTorch, MLflow).
- Proficient in cloud platforms (preferably Azure and Databricks).
- Experience with data quality management, monitoring, and ensuring robust pipelines.
- Knowledge of Predictive Maintenance model development is a strong plus.

What You'll Gain
- Opportunity to work at the forefront of data-driven innovation in a global organization.
- Collaborate with a talented and diverse team to design and implement cutting-edge solutions.
- Expand your expertise in data engineering and machine learning in a real-world industrial setting.

If you are passionate about leveraging data and machine learning to drive innovation, we'd love to hear from you!
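
To illustrate the MLOps workflow referenced above, a minimal MLflow sketch that trains a model, logs its parameters and metrics, and records the artifact for later deployment; the experiment name and data are assumptions:

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Synthetic sensor-style data standing in for real telemetry
    X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=7)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

    mlflow.set_experiment("predictive-maintenance-demo")  # hypothetical experiment name
    with mlflow.start_run():
        model = RandomForestRegressor(n_estimators=200, random_state=7)
        model.fit(X_train, y_train)

        mae = mean_absolute_error(y_test, model.predict(X_test))
        mlflow.log_param("n_estimators", 200)
        mlflow.log_metric("mae", mae)

        # Log the model so it can be versioned and served later
        mlflow.sklearn.log_model(model, "model")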

Posted 3 days ago

Apply

8.0 - 10.0 years

20 - 35 Lacs

Ahmedabad

Remote

We are seeking a talented and experienced Senior Data Engineer to join our team and contribute to building a robust data platform on Azure Cloud. The ideal candidate will have hands-on experience designing and managing data pipelines, ensuring data quality, and leveraging cloud technologies for scalable and efficient data processing. The Data Engineer will design, develop, and maintain scalable data pipelines and systems to support the ingestion, transformation, and analysis of large datasets. The role requires a deep understanding of data workflows, cloud platforms (Azure), and strong problem-solving skills to ensure efficient and reliable data delivery.

Key Responsibilities:
- Data Ingestion and Integration: Develop and maintain data ingestion pipelines using tools like Azure Data Factory, Databricks, and Azure Event Hubs. Integrate data from various sources, including APIs, databases, file systems, and streaming data.
- ETL/ELT Development: Design and implement ETL/ELT workflows to transform and prepare data for analysis and storage in the data lake or data warehouse. Automate and optimize data processing workflows for performance and scalability.
- Data Modeling and Storage: Design data models for efficient storage and retrieval in Azure Data Lake Storage and Azure Synapse Analytics. Implement best practices for partitioning, indexing, and versioning in data lakes and warehouses.
- Quality Assurance: Implement data validation, monitoring, and reconciliation processes to ensure data accuracy and consistency (see the sketch after this listing). Troubleshoot and resolve issues in data pipelines to ensure seamless operation.
- Collaboration and Documentation: Work closely with data architects, analysts, and other stakeholders to understand requirements and translate them into technical solutions. Document processes, workflows, and system configurations for maintenance and onboarding purposes.
- Cloud Services and Infrastructure: Leverage Azure services like Azure Data Factory, Databricks, Azure Functions, and Logic Apps to create scalable and cost-effective solutions. Monitor and optimize Azure resources for performance and cost management.
- Security and Governance: Ensure data pipelines comply with organizational security and governance policies. Implement security protocols using Azure IAM, encryption, and Azure Key Vault.
- Continuous Improvement: Monitor existing pipelines and suggest improvements for better efficiency, reliability, and scalability. Stay updated on emerging technologies and recommend enhancements to the data platform.

Skills:
- Strong experience with Azure Data Factory, Databricks, and Azure Synapse Analytics.
- Proficiency in Python, SQL, and Spark.
- Hands-on experience with ETL/ELT processes and frameworks.
- Knowledge of data modeling, data warehousing, and data lake architectures.
- Familiarity with REST APIs, streaming data (Kafka, Event Hubs), and batch processing.

Good to Have:
- Experience with tools like Azure Purview, Delta Lake, or similar governance frameworks.
- Understanding of CI/CD pipelines and DevOps tools like Azure DevOps or Terraform.
- Familiarity with data visualization tools like Power BI.

Competencies:
- Analytical thinking
- Clear and effective communication
- Time management
- Team collaboration
- Technical proficiency
- Supervising others
- Problem solving
- Risk management
- Organizing and task management
- Creativity/innovation
- Honesty/integrity

Education and Experience:
- Bachelor's degree in Computer Science, Data Science, or a related field.
- 8+ years of experience in a data engineering or similar role.
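As a taste of the validation work this listing describes, here is a minimal sketch of a batch-level quality gate in PySpark; the column names, the stand-in data, and the 10% null tolerance are assumptions for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("quality-gate").getOrCreate()

# Stand-in batch; in practice this would come from the ingestion pipeline.
rows = [(i, f"sensor-{i}", float(20 + i)) for i in range(19)] + [(19, "sensor-19", None)]
df = spark.createDataFrame(rows, ["id", "source", "reading"])

total_rows = df.count()
null_readings = df.filter(F.col("reading").isNull()).count()

# Reject the batch outright if it is empty or too sparse to trust.
if total_rows == 0:
    raise ValueError("Validation failed: empty batch")
if null_readings / total_rows > 0.10:  # assumed 10% tolerance
    raise ValueError(f"Validation failed: {null_readings}/{total_rows} null readings")

print(f"Batch accepted: {total_rows} rows, {null_readings} null readings")
```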

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

noida, uttar pradesh

On-site

Genpact is a global professional services and solutions firm committed to delivering outcomes that help shape the future. With a team of over 125,000 individuals across 30+ countries, we are driven by curiosity, entrepreneurial agility, and a desire to create lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, empowers us to serve and transform leading enterprises, including the Fortune Global 500, utilizing our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently looking for a Principal Consultant - Data Scientist specializing in Azure Generative AI & Advanced Analytics. As a highly skilled and experienced professional, you will be responsible for developing and optimizing AI/ML models, analyzing complex datasets, and providing strategic recommendations for embedding models and Generative AI applications. Your role will be crucial in driving AI-driven insights and automation within our business.

Responsibilities:
- Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets for actionable insights and data-driven decision-making.
- Design, develop, and implement Generative AI solutions leveraging various platforms, including AWS Bedrock, Azure OpenAI, Azure Machine Learning, and Cognitive Services.
- Utilize Azure Document Intelligence to extract and process structured and unstructured data from diverse document sources.
- Build and optimize data pipelines to efficiently process and analyze large-scale datasets.
- Implement Agentic AI techniques to develop intelligent, autonomous systems capable of making decisions and taking actions.
- Research, evaluate, and recommend embedding models, language models, and generative models for diverse business use cases (a hedged vector-search sketch follows this listing).
- Continuously monitor and assess the performance of AI models and data-driven solutions, refining and optimizing them as necessary.
- Stay updated with the latest industry trends, tools, and technologies in data science, AI, and generative models to enhance existing solutions and develop new ones.
- Mentor and guide junior team members to aid in their professional growth and skill development.
- Ensure model explainability, fairness, and compliance with responsible AI principles.
- Keep abreast of advancements in AI, ML, and data science and apply best practices to enhance business operations.

Minimum Qualifications / Skills:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field.
- Experience in data science, machine learning, AI applications, generative AI prompt engineering, and creating custom models.
- Proficiency in Python, TensorFlow, PyTorch, PySpark, Scikit-learn, and MLflow.
- Hands-on experience with Azure AI services (Azure OpenAI, Azure Document Intelligence, Azure Machine Learning, Azure Synapse, Azure Data Factory, Databricks, RAG pipelines).
- Expertise in LLMs, transformer architectures, and embeddings.
- Experience in building and optimizing end-to-end data pipelines.
- Familiarity with vector databases, FAISS, Pinecone, and knowledge retrieval techniques.
- Knowledge of Reinforcement Learning from Human Feedback (RLHF), fine-tuning LLMs, and prompt engineering.
- Strong analytical skills with the ability to translate business requirements into AI/ML solutions.
- Excellent problem-solving, critical thinking, and communication skills.
- Experience with cloud-native AI deployment, containerization (Docker, Kubernetes), and MLOps practices is advantageous.

Preferred Qualifications / Skills:
- Experience with multi-modal AI models and computer vision applications.
- Exposure to LangChain, Semantic Kernel, RAG (Retrieval-Augmented Generation), and knowledge graphs.
- Certifications in Microsoft Azure AI, Data Science, or ML Engineering.

Job Title: Principal Consultant
Location: India-Noida
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Apr 11, 2025, 9:36:00 AM
Unposting Date: May 11, 2025, 1:29:00 PM
Master Skills List: Digital
Job Category: Full Time
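To make the vector-retrieval skills above concrete, here is a minimal similarity-search sketch with FAISS; the embedding dimension and the random stand-in vectors are assumptions — in a real RAG pipeline the vectors would come from an embedding model.

```python
import faiss
import numpy as np

dim = 384                                  # assumed embedding dimension
rng = np.random.default_rng(42)

# Random stand-ins; real vectors would come from an embedding model.
doc_embeddings = rng.random((100, dim), dtype="float32")
query_embedding = rng.random((1, dim), dtype="float32")

index = faiss.IndexFlatL2(dim)             # exact L2 search, fine at this scale
index.add(doc_embeddings)

distances, ids = index.search(query_embedding, 3)  # top-3 nearest documents
print("nearest ids:", ids[0], "distances:", distances[0])
```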

Posted 3 days ago

Apply

5.0 - 9.0 years

15 - 25 Lacs

Pune, Gurugram

Hybrid

About the Role: We are looking for a skilled Data Engineer with strong experience in Databricks to join our data team. You will play a key role in building scalable data pipelines, optimizing data workflows, and supporting data analytics and machine learning initiatives. You should have solid experience working with big data technologies, cloud platforms, and data warehousing solutions.

Key Responsibilities:
- Design, develop, and optimize scalable ETL/ELT data pipelines using Apache Spark on Databricks
- Collaborate with data scientists, analysts, and business stakeholders to gather data requirements and deliver clean, reliable datasets
- Build and manage data models, data lakes, and data warehouses (preferably in Delta Lake architecture; see the sketch after this listing)
- Implement data quality checks, monitoring, and alerting systems
- Optimize performance of data workflows in Databricks and ensure cost-efficient use of cloud resources
- Integrate with various structured and unstructured data sources (APIs, databases, files, etc.)
- Contribute to best practices and standards for data engineering and cloud development

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 5+ years of experience as a Data Engineer or similar role
- Hands-on experience with Databricks and Apache Spark
- Strong programming skills in Python and/or Scala
- Proficient in SQL and data modeling (e.g., star/snowflake schemas)
- Experience with cloud platforms like Azure, AWS, or GCP (Azure preferred if using Azure Databricks)
- Solid understanding of Delta Lake, Lakehouse architecture, and data governance

Preferred Skills:
- Experience with CI/CD for data pipelines
- Knowledge of MLflow, Unity Catalog, or Databricks Workflows
- Familiarity with orchestration tools like Airflow, Azure Data Factory, or Prefect
- Exposure to BI tools like Power BI, Tableau, or Looker
- Experience with real-time data streaming tools (e.g., Kafka, Event Hubs)

What We Offer:
- Competitive salary and performance-based bonuses
- Flexible working arrangements
- Opportunities to work with cutting-edge data technologies
- A collaborative and innovative work environment
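To illustrate the Delta Lake work this role describes, here is a minimal upsert sketch using Delta's MERGE, assuming a Databricks environment where `spark` is predefined; the table paths and join key are hypothetical.

```python
from delta.tables import DeltaTable

# `spark` is assumed to be the session a Databricks notebook provides.
target = DeltaTable.forPath(spark, "/mnt/lake/silver/customers")  # hypothetical path
updates_df = spark.read.format("delta").load("/mnt/lake/bronze/customer_updates")  # hypothetical path

# Idempotent upsert: update existing customers, insert new ones.
(target.alias("t")
    .merge(updates_df.alias("u"), "t.customer_id = u.customer_id")  # assumed key
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```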

Posted 3 days ago

Apply

3.0 - 7.0 years

4 - 8 Lacs

Pune

Work from Office

As a data engineer, you will be responsible for delivering data intelligence solutions to our customers all around the globe, based on an innovative product which provides insights into the performance of their material handling systems. You will be working on implementing and deploying the product as well as designing solutions to fit it to our customers' needs. You will work together with an energetic and multidisciplinary team to build end-to-end data ingestion pipelines and implement and deploy dashboards.

Your tasks and responsibilities:
- You will design and implement data and dashboarding solutions to maximize customer value.
- You will deploy and automate the data pipelines and dashboards to enable further project implementation.
- You embrace working in an international, diverse team, with an open and respectful atmosphere.
- You leverage data by making it available for other teams within our department to enable our platform vision.
- Communicate and work closely with other groups within Vanderlande and the project team.
- You enjoy an independent and self-reliant way of working with a proactive style of communication, taking ownership to provide the best possible solution.
- You will be part of an agile team that encourages you to speak up freely about improvements, concerns, and blockages. As part of Scrum methodology, you will independently create stories and participate in the refinement process.
- You collect feedback and always search for opportunities to improve the existing standardized product.
- Execute projects from conception through client handover with a positive contribution to technical performance and the organization.
- You will take the lead in communication with the different stakeholders involved in the projects being deployed.

Your profile:
- Bachelor's or master's degree in computer science, IT, or equivalent and a minimum of 6+ years of experience building and deploying complex data pipelines and data solutions.
- Experience developing end-to-end data pipelines using technologies like Databricks.
- Experience with visualization software, preferably Splunk (or else Power BI, Tableau, or similar).
- Strong experience with SQL and Python, with hands-on experience in data modeling.
- Hands-on experience with programming in Python or Java, and proficiency in Test-Driven Development using pytest (see the sketch after this listing).
- Experience with PySpark or Spark SQL to deal with distributed data.
- Experience with data schemas (e.g., JSON/XML/Avro).
- Experience in deploying services as containers (e.g., Docker, Podman).
- Experience in working with cloud services (preferably Azure).
- Experience with streaming and/or batch storage (e.g., Kafka, Oracle) is a plus.
- Experience in creating APIs is a plus.
- Experience in guiding, motivating, and training engineers.
- Experience in data quality management and monitoring is a plus.
- Strong communication skills in English.
- Skilled at breaking down large problems into smaller, manageable parts.
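To illustrate the Test-Driven Development expectation above, here is a minimal pytest sketch for a PySpark transformation; the function and column names are invented for the example.

```python
import pytest
from pyspark.sql import SparkSession, functions as F


def add_utilisation_pct(df):
    """Derive a utilisation percentage from runtime and available hours."""
    return df.withColumn(
        "utilisation_pct",
        F.round(F.col("runtime_hours") / F.col("available_hours") * 100, 1),
    )


@pytest.fixture(scope="session")
def spark():
    # A small local session is enough for unit-testing transformations.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_utilisation_pct(spark):
    df = spark.createDataFrame([(45.0, 50.0)], ["runtime_hours", "available_hours"])
    row = add_utilisation_pct(df).collect()[0]
    assert row["utilisation_pct"] == 90.0
```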

Posted 3 days ago

Apply

Exploring Data Bricks Jobs in India

The data bricks job market in India is flourishing, with a high demand for professionals skilled in data bricks technology. Companies across various industries are leveraging data bricks to manage and analyze their data effectively. Job seekers with expertise in data bricks can explore a multitude of exciting career opportunities in India.

Top Hiring Locations in India

Here are the top 5 major cities actively hiring for data bricks roles in India:
- Bangalore
- Pune
- Hyderabad
- Chennai
- Mumbai

Average Salary Range

The average salary range for data bricks professionals in India varies with experience. Entry-level positions can expect a salary of INR 4-6 lakhs per annum, while experienced professionals can earn INR 15-20 lakhs per annum.

Career Path

A typical career progression in data bricks moves from Junior Developer to Senior Developer to Tech Lead, eventually advancing to roles like Data Engineer, Data Architect, or Data Scientist.

Related Skills

In addition to expertise in data bricks, professionals in this field are often expected to have skills in:
- Apache Spark
- Python
- SQL
- Data warehousing
- Data visualization tools

Interview Questions

  • What is Apache Spark and how does it relate to data bricks? (basic)
  • Explain the difference between data bricks and traditional Hadoop. (medium)
  • How do you optimize performance in data bricks? (medium)
  • Describe the architecture of data bricks. (medium)
  • What are the advantages of using data bricks for data processing? (basic)
  • How do you handle missing or corrupt data in data bricks? (medium)
  • Explain the concept of lazy evaluation in data bricks. (advanced)
  • What are the different deployment modes in data bricks? (medium)
  • How do you tune the performance of Spark jobs in data bricks? (advanced)
  • Can you explain the concept of lineage in data bricks? (medium)
  • How does data bricks ensure data reliability? (medium)
  • What is the significance of the driver and executor in data bricks? (basic)
  • Explain the concept of partitions in data bricks. (basic)
  • How do you handle schema evolution in data bricks? (advanced)
  • What is the role of the shuffle operation in data bricks? (medium)
  • How do you debug performance issues in data bricks jobs? (advanced)
  • Explain the concept of caching in data bricks. (basic)
  • How does data bricks support real-time data processing? (medium)
  • Can you explain the difference between RDD, DataFrame, and Dataset in Spark? (advanced)
  • What are the various data sources supported by data bricks? (basic)
  • How do you handle skewed data in data bricks? (advanced)
  • Explain the use of checkpoints in data bricks. (medium)
  • How do you perform data transformation in data bricks? (basic)
  • What are the different ways to monitor and manage clusters in data bricks? (medium)
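Several of the basic questions above — lazy evaluation, partitions, and caching — can be demonstrated in a few lines of PySpark. The following sketch runs locally and makes no Databricks-specific assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.master("local[2]").appName("interview-prep").getOrCreate()

df = spark.range(1_000_000)                          # a plan, not data: nothing runs yet
doubled = df.withColumn("doubled", F.col("id") * 2)  # still lazy — just extends the plan

print(doubled.rdd.getNumPartitions())  # partitions are Spark's unit of parallelism

doubled.cache()                        # mark the result for reuse across actions
print(doubled.count())                 # first action: triggers execution, fills the cache
print(doubled.filter(F.col("id") % 2 == 0).count())  # second action: served from the cache

spark.stop()
```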

Closing Remark

As you embark on your journey to explore data bricks jobs in India, remember to equip yourself with the necessary skills and knowledge to stand out in the competitive job market. Prepare diligently, showcase your expertise confidently, and seize the exciting opportunities that await you in the realm of data bricks. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
