
3318 Databricks Jobs - Page 39

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

6.0 years

0 Lacs

Kanayannur, Kerala, India

Remote

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
Solid understanding of ETL/ELT design and implementation principles
Strong SQL and PySpark skills for data transformation and validation
Exposure to Python for automation and scripting
Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
Experience working with Power BI or Tableau for data visualization and reporting support
Strong problem-solving skills, attention to detail, and commitment to data quality
Excellent communication and documentation skills to interface with technical and business teams
Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.

To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfortable working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
Azure Synapse: Familiarity with distributed data querying and data warehousing.
Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
Power BI or Tableau for reporting support
Monitoring/logging using Azure Monitor or Log Analytics
Azure DevOps and Git for CI/CD and version control
Python and/or PySpark for scripting and data handling
Informatica Cloud Data Integration (CDI) or similar ETL tools
Shell scripting or command-line data handling
SQL (across distributed and relational databases)

What We Look For
Enthusiastic learners with a passion for data ops and practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
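For readers unfamiliar with the data-quality work this posting describes, here is a minimal PySpark sketch of the kind of profiling and validation checks such a role involves; the table name, key column, and thresholds are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark sketch of basic data profiling/validation checks.
# The table name, key column, and thresholds are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("data-quality-checks").getOrCreate()

df = spark.read.table("curated.customer_holdings")  # hypothetical table

# Null rate per column (guard against an empty table)
total = max(df.count(), 1)
null_rates = {
    c: df.filter(F.col(c).isNull()).count() / total
    for c in df.columns
}

# Duplicate check on the assumed business key
dupes = (
    df.groupBy("holding_id")
      .count()
      .filter(F.col("count") > 1)
      .count()
)

# Fail the pipeline step if quality thresholds are breached
assert dupes == 0, f"{dupes} duplicate holding_id values found"
assert all(r < 0.05 for r in null_rates.values()), f"High null rates: {null_rates}"
```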

Posted 1 week ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
Solid understanding of ETL/ELT design and implementation principles
Strong SQL and PySpark skills for data transformation and validation
Exposure to Python for automation and scripting
Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
Experience working with Power BI or Tableau for data visualization and reporting support
Strong problem-solving skills, attention to detail, and commitment to data quality
Excellent communication and documentation skills to interface with technical and business teams
Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.

To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfortable working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
Azure Synapse: Familiarity with distributed data querying and data warehousing.
Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
Power BI or Tableau for reporting support
Monitoring/logging using Azure Monitor or Log Analytics
Azure DevOps and Git for CI/CD and version control
Python and/or PySpark for scripting and data handling
Informatica Cloud Data Integration (CDI) or similar ETL tools
Shell scripting or command-line data handling
SQL (across distributed and relational databases)

What We Look For
Enthusiastic learners with a passion for data ops and practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
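This listing repeats the description above. As a complementary illustration, here is a minimal PySpark sketch of an incremental batch load into a Delta table in Azure Data Lake, the kind of pipeline step the responsibilities describe; storage paths, column names, and the watermark value are assumptions for illustration only.

```python
# Illustrative PySpark sketch of an incremental batch load into a Delta table.
# Paths, column names, and the watermark value are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

# Last successfully loaded date; in practice this would come from a control table
last_watermark = "2024-01-31"

raw = (
    spark.read.format("parquet")
    .load("abfss://raw@examplelake.dfs.core.windows.net/transactions/")  # hypothetical path
    .filter(F.col("load_date") > F.lit(last_watermark))
)

cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

(
    cleaned.write.format("delta")
    .mode("append")
    .partitionBy("load_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/transactions/")  # hypothetical path
)
```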

Posted 1 week ago

Apply

9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Must have 9+ years of experience in data engineering
Proficiency with Databricks and Apache Spark
Strong SQL skills and experience with relational databases
Experience with big data technologies (e.g., Hadoop, Kafka)
Knowledge of data warehousing concepts and ETL processes
Experience with CI/CD tools, particularly Jenkins
Excellent problem-solving and analytical skills
Solid understanding of big data fundamentals and experience with Apache Spark
Familiarity with cloud platforms (e.g., AWS, Azure)
Experience with version control systems (e.g., BitBucket)
Understanding of DevOps principles and tools (e.g., CI/CD, Jenkins)
Databricks certification is a plus

A day in the life of an Infoscion
As part of the Infosys consulting team, your primary role would be to lead the engagement effort of providing high-quality and value-adding consulting solutions to customers at different stages - from problem definition to diagnosis to solution design, development and deployment. You will review the proposals prepared by consultants, provide guidance, and analyze the solutions defined for the client business problems to identify any potential risks and issues. You will identify change management requirements and propose a structured approach to the client for managing the change using multiple communication mechanisms. You will also coach and create a vision for the team, provide subject matter training for your focus areas, and motivate and inspire team members through effective and timely feedback and recognition for high performance. You would be a key contributor in unit-level and organizational initiatives with an objective of providing high-quality, value-adding consulting solutions to customers adhering to the guidelines and processes of the organization. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Design, develop, and maintain scalable data pipelines on Databricks using PySpark
Collaborate with data analysts and scientists to understand data requirements and deliver solutions
Optimize and troubleshoot existing data pipelines for performance and reliability
Ensure data quality and integrity across various data sources
Implement data security and compliance best practices
Monitor data pipeline performance and conduct necessary maintenance and updates
Document data pipeline processes and technical specifications

Location of posting
Infosys Ltd. is committed to ensuring you have the best experience throughout your journey with us. We currently have open positions in a number of locations across India - Bangalore, Pune, Hyderabad, Mysore, Kolkata, Chennai, Chandigarh, Trivandrum, Indore, Nagpur, Mangalore, Noida, Bhubaneswar, Coimbatore, Mumbai, Jaipur, Hubli, Vizag. While we work in accordance with business requirements, we shall strive to offer you the location of your choice, where possible.
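As an illustration of the Databricks/PySpark pipeline work and Kafka exposure this posting lists, here is a minimal Structured Streaming sketch that reads JSON events from Kafka and lands them in a Delta table; the broker address, topic, schema, and paths are assumptions, not project details.

```python
# Illustrative PySpark Structured Streaming sketch: consume JSON events from Kafka
# and land them in a Delta table on Databricks. Requires the spark-sql-kafka
# connector on the cluster. Broker, topic, schema, and paths are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "payments")                    # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/payments")  # hypothetical path
    .outputMode("append")
    .start("/mnt/delta/payments")                               # hypothetical path
)
```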

Posted 1 week ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Source: LinkedIn

Key Responsibilities / Tech Skills
Working experience in BI tools such as Tableau/Power BI, QlikView, or similar platforms to create interactive and user-friendly reports
Responsible for designing, developing, and maintaining complex database systems using Oracle PL/SQL programming and related technologies.
Strong understanding of database architecture, performance tuning, and best practices in PL/SQL development.
Develop and manage ETL processes to ensure data is accurately extracted, transformed, and loaded into data warehouses.
Design and develop BI solutions, including dashboards, reports, and data visualizations.
Implement and maintain data models and data warehouses.
Design and implementation experience in building robust data pipelines using Databricks to process large volumes of data efficiently.
Solid understanding of Agile methodology, CI/CD pipelines, and OKRs.
Expertise in debugging code, resolving defects and code quality issues, and familiarity with related tools.

Soft Skills
Taking initiative, being proactive, and demonstrating ownership while executing development tasks.
Ability to clearly communicate and articulate ideas and technical concepts with leads and colleagues.
Ability to work well within a team, contributing to team efforts and remaining available to the team.
Flexibility to learn and adapt to new technologies, tools, and methodologies as needed.
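To illustrate the Databricks side of this role, here is a minimal sketch that runs a SQL aggregation with Spark and persists the result as a table a BI tool such as Power BI or Tableau could read; the source and target table names and columns are assumptions.

```python
# Illustrative sketch: run a SQL aggregation in Databricks and persist the result
# as a reporting table for the BI/dashboard layer. Names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dashboard-feed").getOrCreate()

summary = spark.sql("""
    SELECT region,
           date_trunc('month', order_date) AS order_month,
           SUM(order_amount)               AS total_sales,
           COUNT(DISTINCT customer_id)     AS active_customers
    FROM sales.orders              -- hypothetical source table
    GROUP BY region, date_trunc('month', order_date)
""")

# Overwrite the reporting table consumed by the dashboards
summary.write.mode("overwrite").saveAsTable("reporting.monthly_sales_summary")
```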

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Source: LinkedIn

Description Invent the future with us. Recognized by Fast Company’s 2023 100 Best Workplaces for Innovators List, Ampere is a semiconductor design company for a new era, leading the future of computing with an innovative approach to CPU design focused on high-performance, energy efficient, sustainable cloud computing. By providing a new level of predictable performance, efficiency, and sustainability Ampere is working with leading cloud suppliers and a growing partner ecosystem to deliver cloud instances, servers and embedded/edge products that can handle the compute demands of today and tomorrow. Join us at Ampere and work alongside a passionate and growing team — we’d love to have you apply! About The Role Ampere Computing’s Enterprise Data and AI Team is seeking a Data Engineer proficient in modern data tools within the Azure environment. In this highly collaborative role, you will design, develop, and maintain data pipelines and storage solutions that support our business objectives. This position offers an excellent opportunity to enhance your technical skills, work on impactful projects, and grow your career in data engineering within a supportive and innovative environment. What You’ll Achieve Data Pipeline Development: Design, develop, and maintain data pipelines using Azure technologies such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics. Data Modeling: Collaborate with senior engineers to create and optimize data models that support business intelligence and analytics requirements. Data Storage Solutions: Implement and manage data storage solutions using Azure Data Lake Storage (Gen 2) and Cosmos DB. Coding and Scripting: Write efficient and maintainable code in Python, Scala, or PySpark for data transformation and processing tasks. Collaboration: Work closely with cross-functional teams to understand data requirements and deliver robust data solutions. Continuous Learning: Stay updated with the latest Azure services and data engineering best practices to continuously enhance technical skills. Support and Maintenance: Provide ongoing support for existing data infrastructure, troubleshoot issues, and implement improvements as needed. Documentation: Document data processes, architecture, and workflows to ensure clarity and maintainability. About You Bachelor's degree in Computer Science, Information Technology, Engineering, Data Science, or a related field. 2+ years of experience in a data-related role. Proficiency with Azure data services (e.g., Databricks, Synapse Analytics, Data Factory, Data Lake Storage Gen2). Working knowledge of SQL and at least one programming language (e.g., Python, Scala, PySpark). Strong analytical and problem-solving skills with the ability to translate complex data into actionable insights. Excellent communication skills, with the ability to explain technical concepts to diverse audiences. Experience with data warehousing concepts, ETL processes, and version control systems (e.g., Git). Familiarity with Agile methodologies. What We’ll Offer At Ampere we believe in taking care of our employees and providing a competitive total rewards package that includes base pay, bonus (i.e., variable pay tied to internal company goals), long-term incentive, and comprehensive benefits. Benefits Highlights Include Premium medical, dental, vision insurance, parental benefits including creche reimbursement, as well as a retirement plan, so that you can feel secure in your health, financial future and child care during work. 
Generous paid time off policy so that you can embrace a healthy work-life balance.
Fully catered lunch in our office along with a variety of healthy snacks, energizing coffee or tea, and refreshing drinks to keep you fueled and focused throughout the day.
And there is much more than compensation and benefits. At Ampere, we foster an inclusive culture that empowers our employees to do more and grow more. We are passionate about inventing industry-leading cloud-native designs that contribute to a more sustainable future. We are excited to share more about our career opportunities with you through the interview process.
Ampere is an inclusive and equal opportunity employer and welcomes applicants from all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, religion, age, veteran and/or military status, sex, sexual orientation, gender, gender identity, gender expression, physical or mental disability, or any other basis protected by federal, state or local law.
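As a small illustration of the Azure pipeline work described in this posting, here is a minimal PySpark sketch that reads CSV files from Azure Data Lake Storage Gen2 with an explicit schema and writes a Delta table; the storage account, container, and column names are assumptions.

```python
# Illustrative sketch of a small Azure pipeline step: read CSV files from
# ADLS Gen2, enforce a schema, and write a Delta table. Account, container,
# and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DateType

spark = SparkSession.builder.appName("adls-ingest").getOrCreate()

schema = StructType([
    StructField("device_id", StringType(), nullable=False),
    StructField("units_sold", IntegerType()),
    StructField("ship_date", DateType()),
])

source_path = "abfss://landing@exampleaccount.dfs.core.windows.net/shipments/"  # hypothetical
target_path = "abfss://curated@exampleaccount.dfs.core.windows.net/shipments/"  # hypothetical

df = spark.read.schema(schema).option("header", "true").csv(source_path)
df.write.format("delta").mode("overwrite").save(target_path)
```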

Posted 1 week ago

Apply

9.0 - 11.0 years

12 - 15 Lacs

Bengaluru

Hybrid

Source: Naukri

Hands-on Data Engineer with strong Databricks expertise in Git/DevOps integration, Unity Catalog governance, and performance tuning of data transformation workloads. Skilled in optimizing pipelines and ensuring secure, efficient data operations.
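For context on the two themes in this short listing, here is a minimal Databricks sketch combining a Unity Catalog permission grant with a simple performance-tuning pass on a Delta table; the catalog, schema, table, and group names are assumptions.

```python
# Illustrative Databricks sketch touching the two themes in this posting:
# a Unity Catalog permission grant and a simple performance-tuning pass.
# Catalog, schema, table, and principal names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("governance-and-tuning").getOrCreate()

# Unity Catalog: grant read access on a governed table to an account group
spark.sql("GRANT SELECT ON TABLE main.finance.transactions TO `data_analysts`")

# Performance tuning: compact small files and co-locate data on a common filter column
spark.sql("OPTIMIZE main.finance.transactions ZORDER BY (transaction_date)")
```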

Posted 1 week ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

We are seeking a dynamic and experienced Senior Systems Engineering Manager with a strong focus on MLOps to lead our engineering teams. The ideal candidate will possess a deep understanding of system engineering principles, a solid grasp of MLOps-related technical stacks, and excellent leadership skills. You will play a key role in driving strategic initiatives, building and organizing engineering units, and collaborating closely with stakeholders to deliver cutting-edge solutions.

Responsibilities
Lead and mentor engineering teams focused on designing, developing, and maintaining scalable MLOps infrastructure
Build and organize organizational structures, including units and sub-units, to maximize team efficiency
Collaborate with account teams and customers, delivering impactful presentations and ensuring stakeholder alignment
Drive the recruitment, selection, and development of top engineering talent, fostering a culture of growth and innovation
Oversee the implementation of CI/CD pipelines, Infrastructure as Code (IaC), and containerization solutions to enhance development workflows
Ensure seamless integration and deployment processes using public cloud platforms, enabling scalable and reliable solutions
Inspire, motivate, and guide teams to achieve their best potential

Requirements
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
15+ years of proven experience in systems engineering, with a focus on MLOps or related fields
Demonstrated leadership experience, including team building, talent management, and strategic planning
Strong understanding of Infrastructure as Code (IaC) principles
Expertise in CI/CD pipelines and tools
Proficiency in containerization technologies (e.g., Docker, Kubernetes)
Hands-on experience with at least one public cloud platform (e.g., AWS, GCP, Azure)
Basic knowledge of Machine Learning (ML) and Data Science (DS) concepts
Ability to work effectively with diverse teams, coordinate tasks, and nurture talent
Experience in building and scaling organizational units and sub-units
Excellent communication and presentation skills, with the ability to engage customers and internal stakeholders

Nice to have
Experience with Databricks, SageMaker, or MLflow
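Since MLflow is listed as a nice-to-have, here is a minimal experiment-tracking sketch showing the kind of workflow it supports; the experiment name, parameters, and toy model are assumptions, not the team's actual setup.

```python
# Illustrative MLflow tracking sketch: log parameters, a metric, and a model
# for a toy classifier. Experiment name and parameters are assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name

with mlflow.start_run():
    model = LogisticRegression(C=0.5, max_iter=500)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```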

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Lead Data Scientist

About Us
Cognitio Analytics, founded in 2013, aims to be the preferred provider of AI/ML-driven productivity solutions for large enterprises. The company has received awards for its Smart Operations and Total Rewards Analytics Solutions and is dedicated to innovation, R&D, and creating sustained value for clients. Cognitio Analytics has been recognized as a "Great Place to Work" for its commitment to fostering an innovative work environment and employee satisfaction. Our solutions include Total Rewards Analytics, powered by Cognitio's Total Rewards Data Factory; these solutions help our clients achieve better outcomes and higher ROI on investments in all kinds of Total Rewards programs. Our smart operations solutions drive productivity in complex operations, such as claims processing and commercial underwriting. These solutions, built on proprietary AI capabilities, advanced process and task mining, and a deep understanding of operations, drive effective digital transformation for our clients.

Ideal qualifications, skills and experience we are looking for:
We are actively seeking a talented and results-driven Data Scientist to join our team and take on a leadership role in driving business outcomes through the power of data analytics and insights. Your contributions will be instrumental in making data-informed decisions, identifying growth opportunities, and propelling our organization to new levels of success.
Doctorate/Master's/Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, Economics, Commerce or a related field.
Minimum of 6 years of experience working as a Data Scientist or in a similar analytical role, with experience leading data science projects and teams.
Experience in the healthcare domain with exposure to clinical operations, financial, risk rating, fraud, digital, sales and marketing, and wellness, e-commerce or the ed-tech industry is a plus.
Proven ability to lead and mentor a team of data scientists, fostering an innovative environment.
Strong decision-making and problem-solving skills to guide strategic initiatives.
Expertise in programming languages such as Python and R, and proficiency with data manipulation, analysis, and visualization libraries (e.g., pandas, NumPy, Matplotlib, seaborn).
Very strong Python skills, exceptional with pandas and NumPy, and comfortable with advanced Python (pytest, classes, inheritance, docstrings).
Deep understanding of machine learning algorithms, model evaluation, and feature engineering. Experience with frameworks like scikit-learn, TensorFlow, or PyTorch.
Experience leading a team and handling projects with end-to-end ownership is a must.
Deep understanding of ML and Deep Learning is a must. Basic NLP experience is highly valuable. PySpark experience is highly valuable. Competitive coding experience (LeetCode) is highly valuable.
Strong expertise in statistical modelling techniques such as regression, clustering, time series analysis, and hypothesis testing.
Experience building and deploying machine learning models in a cloud environment: Microsoft Azure preferred (Databricks, Synapse, Data Factory, etc.). Basic MLOps experience with FastAPI and Docker is highly valuable, as is exposure to AI governance.
Ability to understand business objectives, market dynamics, and strategic priorities. Demonstrated experience translating data insights into tangible business outcomes and driving data-informed decision-making.
Excellent verbal and written communication skills.
Proven experience leading data science projects, managing timelines, and delivering results within deadlines.
Strong collaboration skills with the ability to work effectively in cross-functional teams, build relationships, and foster a culture of knowledge sharing and continuous learning.

Cognitio Analytics is an equal-opportunity employer. We are committed to a work environment that celebrates diversity. We do not discriminate against any individual based on race, color, sex, national origin, age, religion, marital status, sexual orientation, gender identity, gender expression, military or veteran status, disability, or any factors protected by applicable law.
All Cognitio employees are expected to understand and adhere to all Cognitio Security and Privacy related policies in order to protect Cognitio data and our clients' data.
Our salary ranges are based on paying competitively for our size and industry and are one part of the total compensation package that also includes a bonus plan, equity, benefits, and other opportunities at Cognitio. Individual pay decisions are based on a number of factors, including qualifications for the role, experience level, and skillset.
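To illustrate the pandas/scikit-learn workflow this role emphasizes, here is a minimal sketch of feature preparation in a Pipeline with cross-validated evaluation; the dataset file and column names are assumptions.

```python
# Illustrative sketch of a pandas/scikit-learn workflow: feature preparation in a
# Pipeline plus cross-validated evaluation. The CSV file and columns are assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("claims_sample.csv")  # hypothetical dataset
X = df[["member_age", "plan_type", "annual_premium"]]
y = df["fraud_flag"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["member_age", "annual_premium"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan_type"]),
])

model = Pipeline([
    ("prep", preprocess),
    ("clf", GradientBoostingClassifier(random_state=42)),
])

# Cross-validated evaluation on a standard classification metric
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean ROC AUC: {scores.mean():.3f}")
```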

Posted 1 week ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
Solid understanding of ETL/ELT design and implementation principles
Strong SQL and PySpark skills for data transformation and validation
Exposure to Python for automation and scripting
Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
Experience working with Power BI or Tableau for data visualization and reporting support
Strong problem-solving skills, attention to detail, and commitment to data quality
Excellent communication and documentation skills to interface with technical and business teams
Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.

To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfortable working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
Azure Synapse: Familiarity with distributed data querying and data warehousing.
Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.
Good to have
Power BI or Tableau for reporting support
Monitoring/logging using Azure Monitor or Log Analytics
Azure DevOps and Git for CI/CD and version control
Python and/or PySpark for scripting and data handling
Informatica Cloud Data Integration (CDI) or similar ETL tools
Shell scripting or command-line data handling
SQL (across distributed and relational databases)

What We Look For
Enthusiastic learners with a passion for data ops and practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
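This listing repeats the description above. As a further illustration of the monitoring and troubleshooting side of the role, here is a minimal plain-Python sketch that wraps a pipeline step with logging and simple retries; the step name and retry policy are assumptions.

```python
# Illustrative plain-Python sketch: wrap a pipeline step with logging and simple
# retries so failures are visible and transient issues are retried. The step name
# and retry policy are assumptions.
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("dataops")

def run_with_retries(step_name, func, retries=3, delay_seconds=30):
    """Run a pipeline step, logging failures and retrying before giving up."""
    for attempt in range(1, retries + 1):
        try:
            log.info("Starting step %s (attempt %d)", step_name, attempt)
            result = func()
            log.info("Step %s succeeded", step_name)
            return result
        except Exception:
            log.exception("Step %s failed on attempt %d", step_name, attempt)
            if attempt == retries:
                raise
            time.sleep(delay_seconds)

# Example usage with a stand-in step
run_with_retries("load_holdings", lambda: print("loading holdings..."), delay_seconds=1)
```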

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

Grade Level (for internal use): 10

Market Intelligence

The Role: Senior Full Stack Developer

The Team: You will work with a team of intelligent, ambitious, and hard-working software professionals. The team is responsible for the architecture, design, development, quality, and maintenance of the next-generation financial data web platform. Other responsibilities include transforming product requirements into technical design and implementation. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams.

The Impact: Market Intelligence is seeking a Software Developer to handle software design, development, and maintenance for data processing applications. This person would be part of a development team that manages and supports the internal and external applications supporting the business portfolio. This role expects the candidate to handle data processing and big data application development. We have teams made up of people that learn how to work effectively together while working with the larger group of developers on our platform.

What’s in it for you:
Opportunity to contribute to the development of a world-class Platform Engineering team.
Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement.
Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack.
Contribute to the development and support of Tier-1, business-critical applications that are central to operations.
Gain exposure to and work with cutting-edge technologies including AWS Cloud, EMR and Apache NiFi.
Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.

Responsibilities:
Design and develop applications, components, and common services based on development models, languages and tools, including unit testing, performance testing and monitoring and implementation
Support business and technology teams as necessary during design, development and delivery to ensure scalable and robust solutions
Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Spark, Python, Scala, NiFi, SQL)
Build data modeling, achieve performance tuning and apply data architecture concepts
Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies
Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality
Provide operations support to resolve issues proactively and with utmost urgency
Effectively manage time and multiple tasks
Communicate effectively, especially in writing, with the business and other technical groups

What We’re Looking For:
Basic Qualifications:
Bachelor’s/Master’s degree in Computer Science, Information Systems or equivalent.
Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native, and MS SQL Server backend development.
Proficiency with object-oriented programming.
Advanced SQL programming skills.
Preferred experience or familiarity with tools and technologies such as OData, Grafana, Kibana, big data platforms, Apache Kafka, GitHub, AWS EMR, Terraform, and emerging areas like AI/ML and GitHub Copilot.
Highly recommended skill set in Databricks, Spark, and Scala technologies.
Understanding of database performance tuning in large datasets.
Ability to manage multiple priorities efficiently and effectively within specific timeframes.
Excellent logical, analytical and communication skills are essential, with strong verbal and writing proficiencies.
Knowledge of fundamentals, or the financial industry, highly preferred.
Experience in conducting application design and code reviews.
Proficiency with the following technologies: object-oriented programming; programming languages (C#, .NET Core); cloud computing; database systems (SQL, MS SQL). Nice to have: NoSQL (Databricks, Spark, Scala, Python), scripting (Bash, Scala, Perl, PowerShell).

Preferred Qualifications:
Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP).
Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing.

Benefits:
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
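To illustrate how the Spark/Databricks part of this stack can sit alongside MS SQL Server, here is a minimal PySpark sketch that pulls a table over JDBC for downstream processing; the connection details and table names are assumptions, and credentials should come from a secret store in practice.

```python
# Illustrative PySpark sketch: pull a table from MS SQL Server over JDBC into Spark
# for downstream processing. Connection details and table names are assumptions;
# credentials should come from a secret store in practice.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mssql-extract").getOrCreate()

fundamentals = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-host:1433;databaseName=MarketData")  # hypothetical
    .option("dbtable", "dbo.CompanyFundamentals")                                  # hypothetical
    .option("user", "etl_user")
    .option("password", "<from-secret-store>")
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load()
)

# Simple aggregation before persisting for analytics
fundamentals.groupBy("sector").count().show()
```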

Posted 1 week ago

Apply

10.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Job Description Are You Ready to Make It Happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It Uniquely Yours. You work with business and IT stakeholders to support a future-state vision in terms of requirements, principles and models in a specific technology, process or function. How You Will Contribute You will work closely with the enterprise architecture team to chart technology roadmaps, standards, best practices and guiding principles, providing your subject matter expertise and technical capabilities to oversee specific applications in architecture forums and when participating in workshops for business function requirements. In collaboration with the enterprise architecture and solution teams, you will help evaluate specific applications with a goal to bring in new capabilities based on the strategic roadmap. You will also deliver seamless integration with the existing ecosystem and support project teams in implementing these new technologies, offer input regarding decisions on key technology purchases to align IT investments with business strategies while reducing risk and participate in technology product evaluation processes and Architecture Review Board governance for project solutions. What You Will Bring A desire to drive your future and accelerate your career. You will bring experience and knowledge in: Defining and driving successful solution architecture strategy and standards Components of holistic enterprise architecture Teamwork, facilitation and negotiation Prioritizing and introducing new data sources and tools to drive digital innovation New information technologies and their application Problem solving, analysis and communication Governance, security, application life-cycle management and data privacy Purpose of Role The Solution Architect will provide end-to-end solution architecture guidance for data science initiatives. A successful candidate will be able to handle multiple projects at a time and drive the right technical architecture decisions for specific business problems. The candidate will also execute PoC/PoVs for emerging AI/ML technologies, support the strategic roadmap and define reusable patterns from which to govern project designs. Main Responsibilities: - Determine which technical architectures are appropriate for which models and solutions. Recommend what technologies and associated configurations are best to solve business problems. Define and document data science architectural patterns. Ensure project compliance with architecture guidelines and processes. Provide guidance and support to development teams during the implementation process. Develop and implement processes to execute AI/ML workloads. Configure and optimize the AI/ML systems for performance and scalability. Stay up to date on the latest AI/ML features and best practices. Integrating SAP BW with SAP S/4HANA and other data sources. Review and sign off on high-level architecture designs. Career Experiences Required & Role Implications Bachelor’s degree in computer science or related field of study. 10+ years of experience in a global company in data-related roles (5+ years in data science). Strong proficiency in Databricks and analytical application frameworks (Dash, Shiny, React). Experience with data engineering using common frameworks (Python, Spark, distributed SQL, NoSQL). Experience leading complex solution designs in a multi-cloud environment. 
Experience with a variety of analytics techniques: statistics, machine learning, optimization, and simulation. Experience with software engineering practices and tools (design, testing, source code management, CI/CD). Deep understanding of algorithms and tools for developing efficient data processing systems. Within Country Relocation support available and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy Business Unit Summary At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands globally and locally including many household names such as Oreo , belVita and LU biscuits; Cadbury Dairy Milk , Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen—and happen fast. Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. Job Type Regular Technology Architecture Technology & Digital Show more Show less

Posted 1 week ago

Apply

0.0 - 3.0 years

3 - 5 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do
In this vital role, we are seeking an Associate Data Engineer to design, build, and maintain scalable data solutions that drive business insights. You will work with large datasets, cloud platforms (AWS preferred), and big data technologies to develop ETL pipelines, ensure data quality, and support data governance initiatives.
Develop and maintain data pipelines, ETL/ELT processes, and data integration solutions.
Design and implement data models, data dictionaries, and documentation for accuracy and consistency.
Ensure data security, privacy, and governance standards.
Use Databricks, Apache Spark (PySpark, SparkSQL), AWS, and Redshift for scalable data processing.
Collaborate with cross-functional teams to understand data needs and deliver actionable insights.
Optimize data pipeline performance and explore new tools for efficiency.
Follow best practices in coding, testing, and infrastructure-as-code (CI/CD, version control, automated testing).

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Strong problem-solving, critical thinking, and communication skills.
Ability to collaborate effectively in a team setting.
Proficiency in SQL, data analysis tools, and data visualization.
Hands-on experience with big data technologies (Databricks, Apache Spark, AWS, Redshift).
Experience with ETL tools, workflow orchestration, and performance tuning for big data.

Basic Qualifications:
Bachelor's degree and 0 to 3 years of experience OR diploma and 4 to 7 years of experience in Computer Science, IT or a related field.

Preferred Qualifications:
Knowledge of data modeling, warehousing, and graph databases
Experience with Python, SageMaker, and cloud data platforms.
AWS Certified Data Engineer or Databricks certification preferred.
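As an illustration of the automated-testing practice the posting mentions, here is a minimal sketch that keeps a PySpark transformation as a pure function so it can be unit tested with pytest; the column names and filter rule are assumptions.

```python
# Illustrative sketch of testable pipeline code: keep the transformation as a pure
# function so it can be unit tested with pytest on a local Spark session.
# Column names and the filter rule are assumptions.
from pyspark.sql import DataFrame, SparkSession, functions as F

def keep_valid_orders(df: DataFrame) -> DataFrame:
    """Drop rows with missing order ids and non-positive amounts."""
    return df.filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))

def test_keep_valid_orders():
    spark = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    rows = [("o-1", 10.0), (None, 5.0), ("o-2", -3.0)]
    df = spark.createDataFrame(rows, ["order_id", "amount"])
    result = keep_valid_orders(df)
    assert result.count() == 1  # only the first row passes both rules
```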

Posted 1 week ago

Apply

3.0 - 4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Reserving Data Analyst EXL (NASDAQ:EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning Business EXLerator Framework™, which integrates analytics, automation, benchmarking, BPO, consulting, industry best practices and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation and logistics industries. Headquartered in New York, New York, EXL has more than 24,000 professionals in locations throughout the United States, Europe, Asia (primarily India and Philippines), Latin America, Australia and South Africa. EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting edge analytics techniques and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients’ decision making and embed analytics more deeply into their business processes. Our global footprint of nearly 2,000 data scientists and analysts assist client organizations with complex risk minimization methods, advanced marketing, pricing and CRM strategies, internal cost analysis, and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics. Role & Responsibilities Overview Support in the creation and update of data definitions to allow for reserving data to be classed into risk-profiled subsets for actuarial analysis Coding in Databricks to encode claims datasets per their defined class codes for further analysis and reconciliation against multiple internal sources for credibility & data assurance Maintain current expense (Adjustment & Other expense) processes, while improving them to goal state levels Develop tracking tables for monitoring catastrophe claims for the client’s book of business Create and enhance templates that estimate and track key actuarial diagnostic metrics like AvE, PYD, Settlement rates etc. Support with the preparation of exhibits for regulatory filing Develop and maintain reserve models in tools like ResQ, Arius etc. Identify opportunities for seamless integration with downstream processes Assist in the development and enhancement of other actuarial tools, models, and processes to improve accuracy and efficiency. Provide support with the preparation of financial reports, recons and dashboards. Stay updated with best practices in actuarial and insurance terminology. Mentor and provide guidance to junior team members as needed. Candidate Profile Bachelor’s/Master's degree in engineering, economics, mathematics, actuarial sciences or statistics. Affiliation to IAI or IFOA, with 2-6 CT actuarial exams will be an added advantage 3-4 years Actuarial experience in the P&C insurance industry Good knowledge of insurance terms Advanced skills in Excel, Databricks, SQL, and other relevant tools for data analysis and modeling. 
Familiarity with reserving tools like ResQ or Arius is preferred
Excellent analytical and problem-solving skills, with the ability to analyze complex data and make data-driven decisions.
Strong communication skills, including the ability to effectively communicate actuarial concepts to both technical and non-technical stakeholders.
Ability to work independently and collaboratively in a team-oriented environment.
Detail-oriented with strong organizational and time management skills.
Ability to adapt to changing priorities and deadlines in a fast-paced environment.

What We Offer
EXL Analytics offers an exciting, fast-paced and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of businesses that our clients engage in. You will also learn effective teamwork and time-management skills - key aspects for personal and professional growth. Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance/coaching to every employee through our mentoring program wherein every junior-level employee is assigned a senior-level professional as an advisor. Sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.
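To illustrate the claims class-coding task described in this posting, here is a minimal PySpark sketch that joins raw claims to a class-code mapping and flags unmapped rows for reconciliation; the table and column names are assumptions.

```python
# Illustrative PySpark sketch of claims class-coding: join raw claims to a
# class-code mapping and flag anything left unmapped for reconciliation.
# Table and column names are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-class-coding").getOrCreate()

claims = spark.read.table("raw.claims")                 # hypothetical source
class_map = spark.read.table("reference.class_codes")   # hypothetical mapping table

coded = (
    claims.join(class_map, on=["line_of_business", "state"], how="left")
          .withColumn("class_code", F.coalesce(F.col("class_code"), F.lit("UNMAPPED")))
)

# Reconciliation view: claim counts and incurred losses per class code
coded.groupBy("class_code").agg(
    F.count("*").alias("claim_count"),
    F.sum("incurred_loss").alias("total_incurred"),
).show()
```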

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

SLSQ426R249

We are looking for a Sr Manager of Sales, Commercial & Mid-Market to join our sales organization. You will oversee and motivate a team of Account Executives, and you will be measured by your team’s overall quota attainment, new business acquisition, and expansion into existing customer accounts. This is a team of motivated sellers, so we are looking for someone who is comfortable leading teams and loves developing Account Executives into rising stars. You will report directly to the India Director, Mid-Market & Commercial.

The Impact You Will Have
Promote revenue success: Build and exceed your team’s quarterly/annual sales targets
Build and implement strategic plans: Develop and execute evolving revenue plans and growth tactics
Create trust-based relationships: Develop long-term customer, partner, and cross-functional relationships
Distill customer needs and value: Help your team to understand the commercial goals of your customer and how they relate to your value proposition
Manage the voice of Databricks: Lead your team to communicate the value proposition through proposals and presentations

What We Look For
3+ years as a high-growth SaaS leader with experience leading a hybrid inside/field sales team
3+ years of experience translating a highly technical product into quantified business value for the C-suite
Experience selling against open source or freemium
Demonstrated track record of delivering towards personal and team goals while continuing to up-level your process
Develop and empower employees to achieve personal goals with a team-first mindset
Coach and develop sales fundamentals to Account Executives, including but not limited to prospecting, forecasting, and negotiation tactics
Excellent written and verbal communication skills and experience presenting to the C-suite

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

We are looking for a Sourcer to help us grow our office in Bangalore. You will be the driving force behind finding and engaging with some of the finest talent in the industry as we scale our team from the ground up. This is an exciting time to be part of the Rocketship and the Databricks expansion in India. Sourcers at Databricks are subject matter experts when it comes to talent attraction and are given full autonomy to bring new ideas and creative approaches to life. You will work with hiring managers as a strategic partner, advising on sourcing strategies, passive engagement strategies, competitive intelligence, DEI, and talent insights. You will have an opportunity to work on multiple positions across Sales, Marketing, Solutions Architects (Pre-Sales), Data Engineering, and more! If you're passionate about recruiting and sourcing, enjoy using alternative techniques to find diverse talent pools, and relish the creative freedom to try new things, then apply below! The Impact You Will Have Drive talent acquisition efforts by extensively sourcing and engaging with highly skilled professionals in Bangalore. Contribute to the growth and success of Databricks in India by building a strong pipeline of top-tier candidates. Collaborate with the APJ recruiting team and hiring managers to understand the hiring needs and develop effective sourcing strategies. Ensure a positive candidate experience throughout the sourcing and evaluation process, acting as a brand ambassador for Databricks. Play a key role in building a diverse and inclusive workforce, promoting equal opportunity and representation in technical roles. What We Look For Proven experience as a Sourcer or similar role in the technology industry. Strong understanding of technical and non-technical roles, and the ability to effectively assess candidates' technical skills across all experience levels (Entry to Senior) Extremely hands-on with sourcing, leveraging different tools & platforms to source & recruit strong Tech & Non-Tech talent Expertise in hiring GTM professionals across domains like SaaS, Big Data, ML, AI, etc. is a plus Excellent communication and interpersonal skills, with the ability to engage candidates, build relationships, and effectively represent Databricks' values and culture. Familiarity with applicant tracking systems (ATS) and other recruitment software for efficient candidate management and tracking. Someone who thinks beyond the usual tools and creates innovative solutions to finding, engaging & hiring the very best people. About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.
Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone. Show more Show less

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Position Summary USI Data Scientist, GenAI, Strategic Analytics – Data Science - Assistant Manager Deloitte is a leader in GenAI products and solutions. We are looking for a highly skilled and experienced Data Scientist to drive the ground-breaking GenAI products we are creating within Strategic Analytics. The ideal candidate will have a strong background in the design and implementation of GenAI models and agents, as well as expertise in building and maintaining front-end UIs. Your work will be integral to delivering on some of the most important priorities at Deloitte. If you seek a role that focuses on cutting-edge technology, delivers solutions for executive leadership, revolutionizes the future of Finance, and prioritizes professional growth, look no further. The team Strategic Analytics sits within our Finance FP&A organization. We support the firm and business executive leaders along with our counterparts in other financial and operational roles. Our team leverages cloud computing, AI & data science, strategic thinking, and deep firm knowledge to support leadership’s most important business decisions. Our insights are integral to driving growth into the future. GenAI is a top priority for the future of the firm. We are actively building products and solutions that will revolutionize our firm and our clients. The selected candidate would be directly involved in the success of these engagements. Specific responsibilities and qualifications for the Data Scientist role are outlined below. Work you’ll do Core responsibilities: Support the design, development, and implementation of GenAI models & agents. Create & maintain an intuitive and engaging UI/UX for analytics tools and applications. Execute timelines, milestones, and metrics to effectively plan and execute projects. Ensure the scalability, reliability, and performance of AI models and analytics platforms. Translate complex technical concepts and insights to non-technical stakeholders. Support and build presentations on our GenAI products & solutions for clients and leadership. Develop and implement best practices for AI model development, deployment, and monitoring. Other responsibilities: Collaborate with cross-functional teams to identify new GenAI opportunities to drive business value. Stay current with the latest advancements in GenAI, AI/ML, and UI/UX design. Automate and streamline projects to increase efficiencies and scalability. Support ad-hoc requests and investigations into the data & processes that support our GenAI tools. Qualifications Required: Bachelor’s degree in AI / Data Science or a related subject Minimum of 5+ years of relevant experience Demonstrated accomplishments in the following areas: Advanced understanding of data science coding languages (Python, R, SQL). Experience developing GenAI solutions with LLMs (Llama, ChatGPT). Experience with cloud platforms (Databricks, Microsoft Azure). Experience with visualization tools (Tableau, Power BI). Advanced in MS Office (Excel, PowerPoint, Outlook, Teams). Implementing complex AI/ML algorithms. Strong project management skills. Excellent problem-solving skills and the ability to think strategically. Strong leadership and team management skills. Excellent communication and interpersonal skills. Preferred: Advanced education degree. Certifications in GenAI, AI/ML, or UI/UX design. Experience with front-end development (HTML, CSS, JavaScript, React). Experience in the financial services or consulting industry. Business knowledge of financial and operational data.
Location: Hyderabad Shift timing: How You’ll Grow At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help sharpen skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Center. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. Recruiter tips We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn.
We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300601 Show more Show less
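The Deloitte role above centers on building GenAI models and agents in Python and wiring them into analytics tools. As a rough, hedged illustration of one building block such work involves, the sketch below sends a prompt to a hosted LLM and returns the completion; the openai client, the model name, the environment variable, and the summarize helper are all assumptions made for illustration, not details from the posting.

import os
from openai import OpenAI  # assumes the openai>=1.x Python client is installed

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # hypothetical env var holding the API key

def summarize(text: str) -> str:
    # Ask the model for a short summary of the given text.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; swap for whatever the team standardizes on
        messages=[
            {"role": "system", "content": "You summarize finance documents concisely."},
            {"role": "user", "content": f"Summarize in two sentences:\n{text}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Quarterly revenue grew 8% driven by consulting, while margins held flat."))

In practice a call like this would sit behind the UI layer the role mentions, with prompt templates, evaluation, and monitoring built around it.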

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office

Naukri logo

What you will do In this vital role you will work closely with Amgen Research partners and Technology peers to ensure that their technology/data needs for drug discovery research are translated into technical requirements for solution implementation. The role applies scientific domain and business process expertise to detail product requirements as epics and user stories, along with supporting artifacts like business process maps, use cases, and test plans for the software development teams. Function as a Scientific Business Systems Analyst within a Scaled Agile Framework (SAFe) product team Serve as a liaison between global Research Informatics functional areas and global research scientists, prioritizing their needs and expectations Manage a suite of custom internal platforms, commercial off-the-shelf (COTS) software, and systems integrations Lead the Large Molecule Discovery technology ecosystem and ensure that the platform meets the requirements for data analysis and data integrity Ensure scientific data operations are scoped into building Research-wide Artificial Intelligence/Machine Learning capabilities Ensure operational excellence, cybersecurity and compliance. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Work with the delivery team to estimate, plan, and commit to delivery with high confidence, and identify test cases and scenarios to ensure the quality and performance of IT systems. Basic Qualifications: Master's degree with 4 - 6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Bachelor's degree with 6 - 8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field Preferred Qualifications: 5+ years of experience in implementing and supporting biopharma scientific software platforms. Functional Skills: Must-Have Skills: Proven expertise in a scientific domain area and related technology needs Experience with writing user requirements and acceptance criteria in agile project management systems such as JIRA Experience in configuration and administration of LIMS/ELN platforms (e.g. Benchling), Discovery software tools (e.g. Geneious, Genedata Screener) and Instrument Automation and Analysis platforms Experience using platforms such as Spotfire, Tableau, Power BI, etc., to build dashboards and reports and understanding of basic data querying using SQL, Databricks, etc. Good-to-Have Skills: Experience leading the implementation of scientific software platforms, Electronic Lab Notebook (ELN), or Laboratory Information Management Systems (LIMS) Knowledge of the antibody discovery design, make, test, and analyze cycle. Experience in AI and machine learning for drug discovery research and preclinical development Experience with leveraging LLM tools to accelerate software development processes. Experience with cloud (e.g. AWS) and on-premise infrastructure.

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Naukri logo

What you will do In this vital role, you will be responsible for the end-to-end development of an enterprise analytics and data mastering solution using Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and impactful enterprise solutions that support research cohort-building and the advanced research pipeline. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions, and be extraordinarily skilled with data analysis and profiling. You will collaborate closely with key customers, product team members, and related IT teams to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a good background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering. Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools. Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation Break down features into work that aligns with the architectural direction runway Participate hands-on in pilots and proofs-of-concept for new patterns Create robust documentation from data analysis and profiling, and proposed designs and data logic Develop advanced SQL queries to profile and unify data Develop data processing code in SQL, along with semantic views to prepare data for reporting Develop Power BI models and reporting packages Design robust data models and processing layers that support both analytical processing and operational reporting needs. Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments. Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms. Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability. Collaborate with key customers to define data requirements, functional specifications, and project goals. Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions. What we expect of you We are all different, yet we all use our unique contributions to serve patients. The R&D Data Catalyst Team is responsible for building Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with visibility into Amgen's wealth of human datasets, projects and study histories, and knowledge of various scientific findings. These solutions are pivotal tools in Amgen's goal to accelerate the speed of discovery and speed to market of advanced precision medications. Basic Qualifications: Master's degree and 1 to 3 years of Data Engineering experience OR Bachelor's degree and 3 to 5 years of Data Engineering experience OR Diploma and 7 to 9 years of Data Engineering experience Must Have Skills: Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects) including report development, dashboard creation, and optimization. Minimum of 3 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management.
Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads. Deep understanding of Power BI, including model design, DAX, and Power Query. Proven experience designing and implementing data mastering solutions and data governance frameworks. Expertise in cloud platforms (AWS), data lakes, and data warehouses. Strong knowledge of ETL processes, data pipelines, and integration technologies. Good communication and collaboration skills to work with cross-functional teams and senior leadership. Ability to assess business needs and design solutions that align with organizational goals. Exceptional hands-on capabilities with data profiling, data transformation, and data mastering Success in mentoring and training team members Good to Have Skills: ITIL Foundation or other relevant certifications (preferred) SAFe Agile Practitioner (6.0) Microsoft Certified: Data Analyst Associate (Power BI) or related certification. Databricks Certified Professional or similar certification. Soft Skills: Excellent analytical and troubleshooting skills Deep intellectual curiosity The highest degree of initiative and self-motivation Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences Confident technical leader Ability to work effectively with global, remote teams, specifically including the use of tools and artifacts to ensure clear and efficient collaboration across time zones Ability to handle multiple priorities successfully Team-oriented, with a focus on achieving team goals Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources.
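The responsibilities above include building change-data-capture (CDC) ETL pipelines and data mastering on Databricks. One common pattern, sketched below under assumed table and column names (staging.customer_changes, gold.customers, and the op flag are all hypothetical), is to keep only the latest change per business key and apply it as an idempotent upsert into a Delta table with MERGE INTO.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

# Keep only the most recent change per business key so stale updates are never applied
spark.sql("""
    CREATE OR REPLACE TEMP VIEW customer_changes_latest AS
    SELECT * FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY change_ts DESC) AS rn
        FROM staging.customer_changes
    ) ranked
    WHERE rn = 1
""")

# Idempotent upsert: update existing rows, insert new ones, delete tombstoned records
spark.sql("""
    MERGE INTO gold.customers AS t
    USING customer_changes_latest AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED AND s.op = 'D' THEN DELETE
    WHEN MATCHED THEN UPDATE SET t.name = s.name, t.segment = s.segment, t.updated_at = s.change_ts
    WHEN NOT MATCHED AND s.op != 'D' THEN
      INSERT (customer_id, name, segment, updated_at)
      VALUES (s.customer_id, s.name, s.segment, s.change_ts)
""")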

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Hyderabad

Work from Office

Naukri logo

Role Description: We are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management. Roles & Responsibilities: Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric. Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture. Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency. Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance. Ensure data security, compliance, and role-based access control (RBAC) across data environments. Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets. Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring. Implement data virtualization techniques to provide seamless access to data across multiple storage systems. Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals. Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures. Must-Have Skills: Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies. Proficiency in workflow orchestration and performance tuning for big data processing. Strong understanding of AWS services Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures. Ability to quickly learn, adapt and apply new technologies Strong problem-solving and analytical skills Excellent communication and teamwork skills Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices. Good-to-Have Skills: Deep expertise in the Biotech & Pharma industries Experience in writing APIs to make data available to consumers Experience with SQL/NoSQL databases and vector databases for large language models Experience with data modeling and performance tuning for both OLAP and OLTP databases Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps Education and Professional Certifications Master's degree and 3 to 4+ years of Computer Science, IT or related field experience OR Bachelor's degree and 5 to 8+ years of Computer Science, IT or related field experience AWS Certified Data Engineer preferred Databricks Certificate preferred Scaled Agile SAFe certification preferred Soft Skills: Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills.
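Since the role above owns the design and development of ETL/ELT pipelines on Databricks with PySpark, here is a minimal, hedged sketch of the batch pipeline shape such work typically takes: read raw files, apply typed transformations and simple quality rules, and write a partitioned Delta table. The storage paths, column names, and partition key are assumptions for illustration only.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw parquet landed by an upstream ingestion job (hypothetical path)
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

cleaned = (
    raw
    .dropDuplicates(["order_id"])                         # basic de-duplication on the key
    .withColumn("order_ts", F.to_timestamp("order_ts"))   # enforce types
    .withColumn("order_date", F.to_date("order_ts"))      # derive the partition column
    .filter(F.col("amount") > 0)                          # simple quality rule
)

# Write a partitioned Delta table for downstream analytics (hypothetical path)
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("s3://example-bucket/curated/orders/")
)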

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office

Naukri logo

What you will do Role Description: The role is responsible for designing, developing, and maintaining software solutions for Research scientists. Additionally, it involves automating operations, monitoring system health, and responding to incidents to minimize downtime. You will join a multi-functional team of scientists and software professionals that enables technology and data capabilities to evaluate drug candidates and assess their abilities to affect the biology of drug targets. This team implements scientific software platforms that enable the capture, analysis, storage, and reporting for our Large Molecule Discovery Research team (Design, Make, Test and Analyze processes). The team also interfaces heavily with teams supporting our in vitro assay management systems and our compound inventory platforms. The ideal candidate possesses experience in the pharmaceutical or biotech industry, strong technical skills, and full stack software engineering experience (spanning SQL, back-end, front-end web technologies, automated testing). Roles & Responsibilities: Take ownership of complex software projects from conception to deployment Work closely with the product team, business team including scientists, and other collaborators Analyze and understand the functional and technical requirements of applications, solutions and systems and translate them into software architecture and design specifications Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software Conduct code reviews to ensure code quality and alignment to standard methodologies Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently Stay updated with the latest technology and security trends and advancements What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek is a [type of person] with these qualifications. Basic Qualifications: Doctorate degree OR Master's degree with 4 - 6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Bachelor's degree with 6 - 8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Diploma with 10 - 12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field Preferred Qualifications and Experience: 3+ years of experience in implementing and supporting biopharma scientific software platforms Some experience with ML or generative AI technologies Proficient in Java or Python Proficient in at least one JavaScript UI Framework (e.g. ExtJS, React, or Angular) Proficient in SQL (e.g. Oracle, PostgreSQL, Databricks) Experience with event-based architecture and serverless AWS services such as EventBridge, SQS, Lambda or ECS.
Preferred Qualifications: Experience with Benchling Hands-on experience with Full Stack software development Strong understanding of software development methodologies, mainly Agile and Scrum Working experience with DevOps practices and CI/CD pipelines Experience of infrastructure as code (IaC) tools (Terraform, CloudFormation) Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk) Experience with automated testing tools and frameworks Experience with big data technologies (e.g., Spark, Databricks, Kafka) Experience with leveraging the use of AI-assistants (e.g. GitHub Copilot) to accelerate software development and improve code quality Professional Certifications (please mention if the certification is preferred or mandatory for the role): AWS Certified Cloud Practitioner preferred Soft Skills: Excellent problem solving, analytical, and troubleshooting skills Strong communication and interpersonal skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation Ability to learn quickly & work independently Team-oriented, with a focus on achieving team goals Ability to manage multiple priorities successfully Strong presentation and public speaking skills.
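The posting above calls out event-based architectures with serverless AWS services such as EventBridge, SQS, and Lambda. As a hedged sketch of that pattern, a Python Lambda handler draining a batch of queue messages might look like the following; the payload shape, the register_assay_result helper, and the assumption that the SQS event source mapping has partial batch responses enabled are all illustrative, not details from the posting.

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def register_assay_result(payload: dict) -> None:
    # Placeholder for the real business logic (e.g. writing to a LIMS/ELN API).
    logger.info("Registering result for sample %s", payload.get("sample_id"))

def handler(event, context):
    # Standard Lambda entry point for an SQS-triggered function.
    failures = []
    for record in event.get("Records", []):
        try:
            payload = json.loads(record["body"])
            register_assay_result(payload)
        except Exception:
            logger.exception("Failed to process message %s", record.get("messageId"))
            # Report partial batch failures so only the bad messages are retried
            failures.append({"itemIdentifier": record.get("messageId")})
    return {"batchItemFailures": failures}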

Posted 1 week ago

Apply

0.0 - 2.0 years

2 - 4 Lacs

Hyderabad

Work from Office

Naukri logo

Role Description: We are looking for an Associate Data Engineer with deep expertise in writing data pipelines to build scalable, high-performance data solutions. The ideal candidate will be responsible for developing, optimizing and maintaining complex data pipelines, integration frameworks, and metadata-driven architectures that enable seamless access and analytics. This role requires a deep understanding of big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management. Roles & Responsibilities: Own development of complex ETL/ELT data pipelines to process large-scale datasets Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring Explore and implement new tools and technologies to enhance the ETL platform and the performance of the pipelines Proactively identify and implement opportunities to automate tasks and develop reusable frameworks Eager to understand the biotech/pharma domains & build highly efficient data pipelines to migrate and deploy complex data across systems Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories. Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions Must-Have Skills: Experience in Data Engineering with a focus on Databricks, AWS, Python, SQL, and Scaled Agile methodologies Proficiency in and a strong understanding of data processing and transformation with big data frameworks (Databricks, Apache Spark, Delta Lake, and distributed computing concepts) Strong understanding of AWS services, with the ability to demonstrate it hands-on Ability to quickly learn, adapt and apply new technologies Strong problem-solving and analytical skills Excellent communication and teamwork skills Experience with Scaled Agile Framework (SAFe), Agile delivery, and DevOps practices Good-to-Have Skills: Data Engineering experience in the biotechnology or pharma industry Exposure to APIs and full stack development Experience with SQL/NoSQL databases and vector databases for large language models Experience with data modeling and performance tuning for both OLAP and OLTP databases Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps Education and Professional Certifications Bachelor's degree and 2 to 5+ years of Computer Science, IT or related field experience OR Master's degree and 1 to 4+ years of Computer Science, IT or related field experience AWS Certified Data Engineer preferred Databricks Certificate preferred Scaled Agile SAFe certification preferred Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals.
Ability to learn quickly, be organized and detail oriented. Strong presentation and public speaking skills.
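One of the responsibilities listed above is ensuring data integrity, accuracy, and consistency through rigorous quality checks. A lightweight, hedged sketch of such checks at the end of a PySpark pipeline run could look like this; the table name, key column, and rules are assumptions for illustration.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("curated.orders")  # hypothetical curated table produced by the pipeline

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
duplicate_keys = total - df.select("order_id").distinct().count()

checks = {
    "non_empty": total > 0,
    "no_null_keys": null_keys == 0,
    "no_duplicate_keys": duplicate_keys == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Failing loudly lets the orchestrator mark the run as failed and alert on it
    raise ValueError(f"Data quality checks failed: {failed}")
print(f"All checks passed on {total} rows")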

Posted 1 week ago

Apply

1.0 - 4.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Naukri logo

We are seeking an MDM Associate Data Engineer with 2 - 5 years of experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills, and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; hence candidates with only MDM experience are not eligible for this role. The candidate must have data engineering experience with technologies like SQL, Python, PySpark, Databricks, AWS, etc., along with knowledge of MDM (Master Data Management) Roles & Responsibilities: Analyze and manage customer master data using Reltio or Informatica MDM solutions. Perform advanced SQL queries and data analysis to validate and ensure master data integrity. Leverage Python, PySpark, and Databricks for scalable data processing and automation. Collaborate with business and data engineering teams for continuous improvement in MDM solutions. Implement data stewardship processes and workflows, including approval and DCR mechanisms. Utilize AWS cloud services for data storage and compute processes related to MDM. Contribute to metadata and data modeling activities. Track and manage data issues using tools such as JIRA and document processes in Confluence. Apply Life Sciences/Pharma industry context to ensure data standards and compliance. Basic Qualifications and Experience: Master's degree with 1 - 3 years of experience in Business, Engineering, IT or related field OR Bachelor's degree with 2 - 5 years of experience in Business, Engineering, IT or related field OR Diploma with 6 - 8 years of experience in Business, Engineering, IT or related field Functional Skills: Must-Have Skills: Advanced SQL expertise and data wrangling. Strong experience in Python and PySpark for data transformation workflows. Strong experience with Databricks and AWS architecture. Must have knowledge of MDM, data governance, stewardship, and profiling practices. In addition to the above, candidates having experience with Informatica or Reltio MDM platforms will be preferred. Good-to-Have Skills: Experience with IDQ, data modeling and approval workflow/DCR. Background in Life Sciences/Pharma industries. Familiarity with project tools like JIRA and Confluence. Strong grip on data engineering concepts. Professional Certifications: Any ETL certification (e.g. Informatica) Any Data Analysis certification (SQL, Python, Databricks) Any cloud certification (AWS or Azure) Soft Skills: Strong analytical abilities to assess and improve master data processes and solutions. Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders. Effective problem-solving skills to address data-related issues and implement scalable solutions. Ability to work effectively with global, virtual teams
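The role above pairs MDM knowledge with hands-on SQL and PySpark profiling to validate master data integrity. As a hedged illustration (the mdm.customer_master table, its columns, and the naive match rule are assumptions, not details from the posting), a simple profiling pass over a customer master might report null rates per column and flag potential duplicate party records:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
customers = spark.table("mdm.customer_master")  # hypothetical golden-record table

# Null-rate profile for every column, as a single-row summary
profile = customers.select([
    (F.avg(F.col(c).isNull().cast("int")) * 100).alias(f"{c}_pct_null")
    for c in customers.columns
])
profile.show(truncate=False)

# Naive duplicate candidates: same normalized name within the same country
dupes = (
    customers
    .withColumn("name_norm", F.lower(F.trim("customer_name")))
    .groupBy("name_norm", "country")
    .count()
    .filter("count > 1")
)
dupes.orderBy(F.desc("count")).show(20, truncate=False)

Real MDM match rules in Reltio or Informatica are far richer than this, but a pass like the one sketched above is a useful first look at data quality before stewardship workflows kick in.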

Posted 1 week ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Mumbai, Maharashtra

Work from Office

Naukri logo

Grade Level (for internal use): 10 The Team You will be an expert contributor and part of the Rating Organization's Data Services Product Engineering Team. This team, which has broad and expert knowledge of the Ratings organization's critical data domains, technology stacks and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings' next-gen analytics platform. Responsibilities: Design and implement innovative software solutions to enhance S&P Ratings' cloud-based data platforms. Mentor a team of engineers, fostering a culture of trust, continuous growth, and collaborative problem-solving. Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals. Manage and improve existing software solutions, ensuring high performance and scalability. Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes. Produce comprehensive technical design documents and conduct technical walkthroughs. Experience & Qualifications: Bachelor's degree in Computer Science, Information Systems, Engineering or equivalent is required Proficient with software development lifecycle (SDLC) methodologies like Agile and Test-driven development 7+ years of development experience in enterprise products and modern web development technologies: Java/J2EE, UI frameworks like Angular and React, SQL, Oracle, NoSQL databases like MongoDB Experience designing transactional/data warehouse/data lake solutions and data integrations with the big data ecosystem leveraging AWS cloud technologies Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus Thorough understanding of distributed computing Passionate, smart, and articulate developer Quality-first mindset with a strong background and experience developing products for a global audience at scale Excellent analytical thinking, interpersonal, oral and written communication skills with a strong ability to influence both IT and business partners Superior knowledge of system architecture, object-oriented design, and design patterns. Good work ethic, self-starter, and results-oriented Excellent communication skills are essential, with strong verbal and writing proficiencies Additional Preferred Qualifications: Experience working with AWS Experience with the SAFe Agile Framework Bachelor's/PG degree in Computer Science, Information Systems or equivalent. Hands-on experience contributing to application architecture & designs, proven software/enterprise integration design principles Ability to prioritize and manage work to critical project timelines in a fast-paced environment Excellent analytical and communication skills are essential, with strong verbal and writing proficiencies Ability to train and mentor Benefits: Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you.
S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

POSITION / TITLE: Data Science Lead Location: Offshore – Hyderabad/Bangalore/Pune Who are we looking for? Looking for individuals with 6+ years of experience implementing and managing data science projects. Working knowledge of machine and deep learning based client projects, MVPs, and POCs. Should have expert-level experience with machine learning frameworks like scikit-learn, tensorflow, keras and deep learning architectures like RNNs and LSTM. Should have worked with cognitive services from major cloud platforms like AWS (Textract, Comprehend) or Azure cognitive services, and have a working knowledge of SQL and NoSQL databases and microservices. Should be adept at Python scripting. Experience in NLP and Text Analytics is preferred Responsibilities Technical Skills – Must have: • Knowledge of Natural Language Processing (NLP) techniques and frameworks like Spacy, NLTK, etc. and good knowledge of Text Analytics • Strong understanding of and hands-on experience with machine learning frameworks like scikit-learn, tensorflow, keras and deep learning architectures like RNNs, LSTM, BERT • Should have worked with cognitive services from major cloud platforms like AWS and have a working knowledge of SQL and NoSQL databases. • Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles • Keep abreast of new tools, algorithms and techniques in machine learning and work to implement them in the organization • Strong understanding of evaluation and monitoring metrics for machine learning projects • Strong understanding of containerization using Docker and Kubernetes to get models into production • Ability to translate complex machine learning problem statements into specific deliverables and requirements • Adept at Python scripting Technical Skills – Good To Have • Knowledge of distributed computing frameworks and cloud ML frameworks including AWS. • Experience in natural language processing, computer vision, or deep learning. • Certifications or courses in data science, analytics, or related fields. • Should exhibit diligence and meticulousness in working with data Other Skills We'd Appreciate • 4+ years of experience with Data Science and Machine Learning techniques • Proven track record of getting ML models into production • Hands-on experience with writing ML models in Python. • Prior experience with ML platforms and tools such as Dataiku, Databricks, etc. would be a plus Education Qualification • Bachelor's degree in Computer Science, Information Technology, or related field (Master's degree preferred). Process Skills • General SDLC processes • Understanding of utilizing Agile and Scrum software development methodologies • Skill in gathering and documenting user requirements and writing technical specifications. Behavioral Skills • Good attitude and quick learner. • Well-developed design, analytical & problem-solving skills • Strong oral and written communication skills • Excellent team player, able to work with virtual teams. • Self-motivated and capable of working independently with minimal management supervision. Certification • Having Machine Learning or AI certifications would be an added advantage. Show more Show less
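Given the emphasis above on scikit-learn and NLP/text analytics, here is a minimal, hedged baseline sketch of a text-classification pipeline; the tiny in-line dataset and labels are invented purely for illustration and are not part of the posting.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy data: a handful of support messages with invented labels
texts = [
    "invoice overdue please remit payment",
    "your package has shipped and will arrive soon",
    "payment reminder: balance outstanding",
    "tracking number for your recent order",
]
labels = ["billing", "shipping", "billing", "shipping"]

# TF-IDF features feeding a linear classifier: a common first baseline for text
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)

print(model.predict(["where is my order right now"]))    # expected: shipping
print(model.predict(["second notice: unpaid invoice"]))  # expected: billing

A production project of the kind described above would add a proper train/test split, evaluation metrics, and an MLOps pipeline around a model like this.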

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Linkedin logo

We at MakeMyTrip understand that every traveller is unique and, being the leading OTA in India, we have the leverage to redefine the travel booking experience to meet their needs. If you love to travel and want to be a part of a dynamic team that works on personalizing every user's journey, then look no further. We are looking for a brilliant mind like yours to join our Data Platform team to build exciting data products at a scale where we solve for industry-best, fault-tolerant feature stores, real-time data pipelines, catalogs, and much more. Hands-on: Spark, Scala Technologies: Spark, Aerospike, Databricks, Kafka, Debezium, EMR, Athena, Glue, RocksDB, Redis, Airflow, MySQL, and any other data sources (e.g. Mongo, Neo4J, etc.) used by other teams. Location: Gurgaon/Bengaluru Experience: 6+ years Industry Preference: E-Commerce Show more Show less
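The stack above leans on Spark, Kafka, and Debezium for real-time pipelines feeding feature stores. As a hedged sketch of the overall shape of such a job (shown in PySpark for brevity, even though the role asks for hands-on Scala; the broker address, topic, event schema, and storage paths are assumptions), a Structured Streaming ingestion might look like this:

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

# Assumed shape of the JSON events on the topic
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "booking-events")              # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Append parsed events to a Delta table that downstream feature jobs read from
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/booking-events/")
    .outputMode("append")
    .start("s3://example-bucket/feature-store/booking-events/")
)
query.awaitTermination()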

Posted 1 week ago

Apply

Exploring Databricks Jobs in India

Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Databricks professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Career Path

In the field of Databricks, a typical career path may include:

  • Junior Developer
  • Senior Developer
  • Tech Lead
  • Architect

Related Skills

In addition to Databricks expertise, other skills that are often expected or helpful alongside Databricks include:

  • Apache Spark
  • Python/Scala programming
  • Data modeling
  • SQL
  • Data visualization tools

Interview Questions

  • What is Databricks and how is it different from Apache Spark? (basic)
  • Explain the concept of lazy evaluation in Databricks (see the PySpark sketch after this list). (medium)
  • How do you optimize performance in Databricks? (advanced)
  • What are the different cluster modes in Databricks? (basic)
  • How do you handle data skewness in Databricks? (medium)
  • Explain how you can schedule jobs in Databricks. (medium)
  • What is the significance of Delta Lake in Databricks? (advanced)
  • How do you handle schema evolution in Databricks? (medium)
  • What are the different file formats supported by Databricks for reading and writing data? (basic)
  • Explain the concept of checkpointing in Databricks. (medium)
  • How do you troubleshoot performance issues in Databricks? (advanced)
  • What are the key components of Databricks Runtime? (basic)
  • How can you secure your data in Databricks? (medium)
  • Explain the role of MLflow in Databricks. (advanced)
  • How do you handle streaming data in Databricks? (medium)
  • What is the difference between Databricks Community Edition and Databricks Workspace? (basic)
  • How do you set up monitoring and alerting in Databricks? (medium)
  • Explain the concept of Delta caching in Databricks. (advanced)
  • How do you handle schema enforcement in Databricks? (medium)
  • What are the common challenges faced in Databricks projects and how do you overcome them? (advanced)
  • How do you perform ETL operations in Databricks? (medium)
  • Explain the concept of MLflow Tracking in Databricks. (advanced)
  • How do you handle data lineage in Databricks? (medium)
  • What are the best practices for data governance in Databricks? (advanced)
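For the lazy-evaluation question flagged above, a small PySpark demonstration makes the idea concrete: transformations only build a logical plan, and nothing executes on the cluster until an action runs. The row count and column names below are arbitrary examples.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.range(1_000_000)                          # transformation: nothing is computed yet
filtered = df.filter(F.col("id") % 2 == 0)           # transformation: still just a plan
doubled = filtered.withColumn("x", F.col("id") * 2)  # transformation: the plan keeps growing

doubled.explain()        # inspect the optimized plan without running the job
print(doubled.count())   # action: only this line actually triggers execution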

Closing Remark

As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies