0 years
0 Lacs
Gurugram, Haryana, India
On-site
Gurgaon/Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL’s executive leadership team to maximize benefits and facilitate sustained enterprise advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics team (IDA), is focused on driving innovation through optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market. As we develop an enterprise-wide data and digital strategy that moves us toward greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team’s efforts towards creating, enhancing, and stabilizing the Enterprise data lake through the development of data pipelines. This role requires a person who is a team player and can work well with team members from other disciplines to deliver data in an efficient and strategic manner.

What You’ll Be DOING
What will your essential responsibilities include?
Act as a data engineering expert and partner to Global Technology and data consumers in controlling complexity and cost of the data platform, whilst enabling performance, governance, and maintainability of the estate. Understand current and future data consumption patterns and architecture (at a granular level), and partner with Architects to ensure optimal design of data layers. Apply best practices in data architecture - for example, the balance between materialization and virtualization, the optimal level of de-normalization, caching and partitioning strategies, choice of storage and querying technology, and performance tuning. Lead and execute hands-on research into new technologies, formulating frameworks for assessing new technology against business benefit and implications for data consumers. Act as a best-practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability, and release, enabling rapid growth in data inventory and utilization of the Data Science Platform. Design prototypes and work in a fast-paced, iterative solution delivery model. Design, develop, and maintain ETL pipelines using PySpark in Azure Databricks with Delta tables (a sketch of this pattern follows the listing). Use Harness for the deployment pipeline. Monitor the performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed. Diagnose system performance issues related to data processing and implement solutions to address them. Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture. Maintain integrity and quality across all pipelines and environments. Understand and follow secure coding practices to make sure code is not vulnerable. You will report to the Technical Lead.

What You Will BRING
We’re looking for someone who has these abilities and skills:
Required Skills And Abilities
Effective communication skills. Bachelor’s degree in computer science, mathematics, statistics, finance, a related technical field, or equivalent work experience. Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills. Relevant years of programming experience using Databricks. Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse, and ADLS).
Solid knowledge of network and firewall concepts. Solid experience writing, optimizing, and analyzing SQL. Relevant years of experience with Python. Ability to break down complex data requirements into achievable targets and architect solutions accordingly. Robust familiarity with Software Development Life Cycle (SDLC) processes and workflows, especially Agile. Experience using Harness. Technical lead responsible for both individual and team deliveries.

Desired Skills And Abilities
Worked on big data migration projects. Worked on performance tuning at both the database and big data platform levels. Ability to interpret complex data requirements and architect solutions. Distinctive problem-solving and analytical skills combined with robust business acumen. Solid grounding in Parquet and Delta file formats. Working knowledge of the Azure cloud computing platform. Familiarity with reporting software - Power BI is a plus. Familiarity with dbt is a plus. Passion for data and experience working within a data-driven organization. You care about what you do, and what we do.

Who WE are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What we OFFER
Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another — and our business — to move forward and succeed. Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe. Robust support for Flexible Working Arrangements. Enhanced family-friendly leave benefits. Named to the Diversity Best Practices Index. Signatory to the UK Women in Finance Charter. Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future.
Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We’re committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.
For more information, please see axaxl.com/sustainability.
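As referenced in the listing, the core Databricks responsibility (PySpark ETL into Delta tables) follows a common extract-clean-write pattern. Below is a minimal, illustrative sketch of that pattern; the storage path, column names, and target table are invented for the example, and the SparkSession is assumed to be supplied by a Databricks cluster.

```python
# Minimal PySpark ETL sketch for Azure Databricks with Delta tables.
# Paths, columns, and the table name (silver.policies) are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Extract: read raw CSV landed in the data lake (path is an assumption)
raw = (spark.read
       .option("header", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/policies/"))

# Transform: basic de-duplication, typing, and null filtering
clean = (raw
         .dropDuplicates(["policy_id"])
         .withColumn("premium", F.col("premium").cast("double"))
         .filter(F.col("policy_id").isNotNull()))

# Load: write to a partitioned Delta table ("country" is an assumed column)
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("country")
      .saveAsTable("silver.policies"))
```

In practice the load step is often a MERGE/upsert into the Delta table rather than an overwrite, and a CI/CD tool such as Harness would deploy the job rather than it being run by hand.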
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We help the world run better
At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.

What You’ll Do
An entry-level associate consultant will be responsible for implementing data warehousing and reporting solutions based on SAP Analytics or Data Management products like SAP Business Data Cloud (SAP BDC), SAP Datasphere, SAP HANA, or SAP Analytics Cloud (SAC). Interact with the lead to understand requirements and build objects as per the given design. Responsible for technical implementation of the business models, replication of data from source systems, and creating dashboards or other visualizations using one of the above SAP solutions. Understand defined project methodologies and follow them during execution. Understand the technical design and build the objects as per the design using SAP products. Work as part of a large team under guidance from a technical lead. Flexibility to work on various skills and a proactive approach to take ownership. Reliability to deliver on time and with high quality.

What You Bring
Working experience of 0-3 years in the software industry with a degree in computer science, engineering, accounting, finance, or a related field. Hands-on experience in one of the below skills: SAP Analytics portfolio (SAC, HANA Modelling, BW, Datasphere); SAP Data Management or other data management tools; ABAP development; Databricks AI; BTP application development; Node.js. Experience working on simple business scenarios and implementing them with a given design. Experience in coding and database design. Understanding of database objects like tables, schemas, views, and ER models. Knowledge of analytics concepts, data warehousing, or any reporting or dashboarding. Advanced knowledge of SQL scripting with the ability to write stored procedures. Knowledge of the SAP Analytics and/or Data Management portfolio (e.g. SAC, HANA, BW, Datasphere) or knowledge of ABAP development will be preferred. Good grasp of programming concepts and knowledge of JavaScript and Python. Ability to understand business requirements, derive logic, and implement it technically either as code or as a model. Good problem-solving and troubleshooting skills. Good oral and written communication and interpersonal skills. Location: Bangalore or Gurgaon.

Bring out your best
SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development.
Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.

We win with inclusion
SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com

For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.

EOE AA M/F/Vet/Disability
Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, etc.), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor.

Requisition ID: 423048 | Work Area: Consulting and Professional Services | Expected Travel: 0 - 10% | Career Status: Graduate | Employment Type: Regular Full Time | Additional Locations: .
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Associate Specialist - Data Delivery & Operations
Gurgaon/Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL's executive leadership team to maximize benefits and facilitate sustained enterprise advantage. Our Innovation, Data, and Analytics Office (IDA) is focused on driving innovation by optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market. As we develop an enterprise-wide data and digital strategy that moves us toward a greater focus on the use of data and data-driven insights, we are seeking an Associate Scientist for our Data Sourcing & Solutions team. The role sits across the IDA Department to make sure customer requirements are properly captured and transformed into actionable data specifications. Success in the role will require a focus on proactive management of the sourcing and management of data from source through usage.

What You’ll Be DOING
What will your essential responsibilities include?
Accountable for documenting data requirements (business and functional requirements) and assessing the reusability of Axiom assets. Build processes to simplify and expedite data sourcing, with a focus on delivering data to AXA XL business stakeholders frequently. Develop and operationalize strategic data products, and proactively manage the sourcing and management of data from source through usage (reusable Policy and Claim domain data assets). Perform data validation testing of the data products in partnership with the AXA XL business to ensure the accuracy of the data and validation of the requirements (an illustrative check follows this listing). Assess all data required as part of the data ecosystem to make sure data has a single version of the truth. Respond to ad-hoc data requests to support AXA XL's business. Instill a customer-first attitude, prioritizing service for our business stakeholders above all else. Internalize and execute IDA and company-wide goals to become a data-driven organization. Contribute to best practices and standards to make sure there is a consistent and efficient approach to capturing business requirements and translating them into functional, non-functional, and semantic specifications. Develop a comprehensive understanding of the data and our customers. Drive root cause analysis for identified data deficiencies within reusable data assets delivered via IDA. Identify solution options to improve the consistency, accuracy, and quality of data when captured at its source. You will report to the Senior Scientist - Data Sourcing, Delivery & Operations.

What You Will BRING
We’re looking for someone who has these abilities and skills:
Required Skills And Abilities
A minimum of a bachelor’s or master's degree (preferred) in a relevant discipline. Experience in a data role (business analyst, data analyst, analytics), preferably in the insurance industry and within a data division. Robust SQL knowledge and the technical ability to query AXA XL data sources to understand our data. Excellent presentation, communication (oral and written), and relationship-building skills, across all levels of management and customer interaction. Insurance experience in data, underwriting, claims, and/or operations, including influencing, collaborating, and leading efforts in complex, disparate, and interrelated teams with competing priorities.
Passion for data and experience working within a data-driven organization. Bring together internal data with external industry data to deliver holistic answers. Work with unstructured data to unlock information needed by the business to create unique products for the insurance industry. Possesses robust exploratory analysis skills and high intellectual curiosity. Displays exceptional organizational skills and is detail-oriented. A robust conceptual thinker who connects the dots, with critical-thinking and analytical skills.

Desired Skills And Abilities
Ability to work with team members across the globe and across departments. Ability to take ownership, work under pressure, and meet deadlines. Builds trust and rapport within and across groups. Applies in-depth knowledge of business and specialized areas to solve business problems and understand integration challenges and long-term impact creatively and strategically. Ability to manage the data needs of individual projects while understanding the broader enterprise data perspective. Expected to recommend innovations and improvements to policies and procedures, deploy resources, and perform core activities. Experience with SQL Server, Azure Databricks Notebooks, QlikView, Power BI, and Jira/Confluence is a plus.

Who WE are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What we OFFER
Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another — and our business — to move forward and succeed. Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe. Robust support for Flexible Working Arrangements. Enhanced family-friendly leave benefits. Named to the Diversity Best Practices Index. Signatory to the UK Women in Finance Charter. Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, Sustainability is integral to our business strategy.
In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We’re committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.
For more information, please see axaxl.com/sustainability.
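The data validation testing described in this listing typically reduces to mechanical reconciliation checks between a source extract and the delivered data product. A minimal sketch, assuming pandas and using invented file and column names (not AXA XL systems):

```python
# Illustrative data-validation check: reconcile a delivered data product
# against its source extract. File and column names are placeholders.
import pandas as pd

source = pd.read_parquet("source_claims.parquet")        # assumed source extract
product = pd.read_parquet("claims_data_product.parquet")  # assumed data product

checks = {
    "row_count_matches": len(source) == len(product),
    "no_duplicate_keys": product["claim_id"].is_unique,
    "amounts_reconcile": abs(source["paid_amount"].sum()
                             - product["paid_amount"].sum()) < 0.01,
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```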
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role And Responsibilities
Identifying business problems, understanding customer issues, and fixing them. Evaluating recurring issues and working toward permanent solutions. Focusing on service improvement. Troubleshooting technical issues and design flaws. Working both individually and on a team to deliver work on time.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
BE / B.Tech in any stream, or M.Sc. (Computer Science/IT) / M.C.A., with a minimum of 3-5 years of experience. Expertise in Azure IaaS, PaaS, and SaaS services, with hands-on experience in the services below. VM, Storage Account, Load Balancer, Application Gateway, VNet, Route Table, Azure Bastion, Disaster Recovery, Backup, NSG, Azure Update Manager, Key Vault, etc. Azure Web App, Function App, Logic App, AKS (Azure Kubernetes Service) and containerization, Docker, Event Hub, Redis Cache, Service Mesh and Istio, Application Insights, Databricks, AD, DNS, Log Analytics Workspace, ARO (Azure Red Hat OpenShift). Orchestration and containerization: Docker, Kubernetes, Red Hat OpenShift. Security management: firewall management, FortiGate firewall.

Preferred Technical And Professional Experience
Monitoring through cloud-native tools (CloudWatch, CloudTrail, Azure Monitor, Activity Log, vROps and LogInsight). Server monitoring and management (Windows, Linux, AIX, AWS Linux, Ubuntu Linux). Storage monitoring and management (Blob, S3, EBS, backups, recovery, snapshots).
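For the Azure IaaS management duties above, much of the day-to-day inspection can be scripted against the Azure SDK. A hedged sketch, assuming the azure-identity and azure-mgmt-compute packages, credentials resolvable by DefaultAzureCredential, and a subscription ID supplied via an environment variable:

```python
# Small sketch of programmatic Azure resource inspection.
# Assumes: pip install azure-identity azure-mgmt-compute
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # assumed env var

compute = ComputeManagementClient(credential, subscription_id)

# List all VMs in the subscription with their location and provisioning state
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.provisioning_state)
```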
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Creating Passion: Your Responsibilities
Design, implementation and optimization of ETL and data pipelines for business intelligence and analytics applications. Responsibility for monitoring and troubleshooting existing jobs and job control. Design and development of data quality checks to ensure data integrity. Developing and maintaining comprehensive documentation. Collaborating with data analysts, data scientists and business stakeholders in an international context.

Contributing your strengths: Your qualifications
Qualification and Education Requirements: Bachelor's degree in Computer Science, Data Science, or a related field. Strong conceptual and analytical skills, with a high level of quality awareness. Proven ability to work independently and collaboratively within a globally distributed team. Experience in data engineering and data warehousing, including practical implementation and management of data pipelines. Excellent communication and collaboration skills.
Experience: 5-9 years of experience implementing and managing ETL and data pipelines.
Preferred Skills / Special Skills: Extensive experience and proficiency in Microsoft SQL Server and SSIS (mandatory). Knowledge of development with Databricks, Azure Data Lake, and Python is advantageous; experience with cloud technologies and big data environments is a plus.

Have we awoken your interest? Then we look forward to receiving your online application. If you have any questions, please contact Sonali Samal.

One Passion. Many Opportunities.

The Company
Liebherr CMCtec India Private Limited in Pune (India) was established in 2008 and started its manufacturing plant in its own facility on Pune Solapur Highway in 2012. The company is responsible for the production of tower cranes and drives.

Location
Liebherr CMCtec India Private Limited, Sai Radhe, 6th Floor, Behind Hotel Sheraton Grand, Raja Bahadur Mill Road, Pune, 411001, India

Contact
Sonali Samal
sonali.samal@liebherr.com
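The data quality checks this role calls for are often plain SQL assertions run on a schedule. A minimal sketch against SQL Server, assuming the pyodbc package and invented server, database, and table names; in an SSIS-centric shop the same queries would typically live in an Execute SQL task rather than a script:

```python
# Hedged sketch of scheduled data-quality checks against SQL Server.
# Assumes: pip install pyodbc, plus an installed ODBC driver.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-server;DATABASE=dwh;Trusted_Connection=yes;"  # placeholders
)
cursor = conn.cursor()

# Each check counts rows that violate an integrity rule; zero means clean
checks = {
    "null_business_keys": "SELECT COUNT(*) FROM dbo.FactSales WHERE customer_id IS NULL",
    "orphaned_fact_rows": """SELECT COUNT(*) FROM dbo.FactSales f
                             LEFT JOIN dbo.DimCustomer d ON f.customer_id = d.customer_id
                             WHERE d.customer_id IS NULL""",
}
for name, sql in checks.items():
    bad_rows = cursor.execute(sql).fetchone()[0]
    print(f"{name}: {bad_rows} offending rows")
```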
Posted 1 week ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Cloud Architect
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity
We’re looking for Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
Proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, and conducting workshops, as well as managing in-flight projects focused on cloud and big data. Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]. Understand current and future state enterprise architecture. Contribute to various technical streams during implementation of the project. Provide product- and design-level technical best practices. Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop and deliver technology solutions. Define and develop client-specific best practices around data management within a Hadoop or cloud environment. Recommend design alternatives for data ingestion, processing and provisioning layers. Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark. Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies.

Skills And Attributes For Success
Experience architecting highly scalable solutions on Azure, AWS and GCP. Strong understanding of and familiarity with the Azure/AWS/GCP/big data ecosystem components. Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms. Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming. Hands-on experience with major components like cloud ETL tools, Spark, and Databricks. Experience working with NoSQL in at least one of the data stores - HBase, Cassandra, MongoDB. Knowledge of Spark and Kafka integration with multiple Spark jobs to consume messages from multiple Kafka partitions. Solid understanding of ETL methodologies in a multi-tiered stack, integrating with big data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms. Good knowledge of Apache Kafka and Apache Flume. Experience in enterprise-grade solution implementations.
Experience in performance benchmarking enterprise applications. Experience in data security [on the move, at rest]. Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communicator (written and verbal, formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Strong verbal and written communication skills. Must be a team player and enjoy working in a cooperative and collaborative team environment. Adaptable to new technologies and standards. Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support. Responsible for the evaluation of technical risks and mapping out mitigation strategies. Working knowledge of at least one cloud platform: AWS, Azure or GCP. Excellent business communication, consulting, and quality process skills. Excellence in leading solution architecture, design, build and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domains. Minimum 7 years hands-on experience in one or more of the above areas. Minimum 10 years industry experience.

Ideally, you’ll also have
Strong project management skills. Client management skills. Solutioning skills.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
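The real-time ingestion responsibility in this listing (Apache Kafka plus Spark Streaming) maps onto Spark Structured Streaming's Kafka source. A minimal sketch, assuming the spark-sql-kafka connector is on the classpath and using invented broker, topic, and path names; the Delta sink assumes a Databricks or delta-spark-enabled environment:

```python
# Minimal Kafka -> Spark Structured Streaming ingestion sketch.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "policy-events")               # placeholder topic
          .load())

# Kafka delivers key/value as binary; cast value to string for downstream parsing
parsed = events.select(F.col("value").cast("string").alias("json_payload"))

query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/policy-events")
         .start("/tmp/bronze/policy_events"))
query.awaitTermination()
```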
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Currently we have an open position with our client - an IT consulting firm - for a Principal Databricks Engineer/Architect.

Key Responsibilities:
1. Databricks Solution Architecture: Design and implement scalable, secure, and efficient Databricks solutions that meet client requirements.
2. Data Engineering: Develop data pipelines, architect data lakes, and implement data warehousing solutions using Databricks.
3. Data Analytics: Collaborate with data scientists and analysts to develop and deploy machine learning models and analytics solutions on Databricks.
4. Performance Optimization: Optimize Databricks cluster performance, ensuring efficient resource utilization and cost-effectiveness.
5. Security and Governance: Implement Databricks security features, ensure data governance, and maintain compliance with industry regulations.
6. Client Engagement: Work closely with clients to understand their business requirements, provide technical guidance, and deliver high-quality Databricks solutions.
7. Thought Leadership: Stay up to date with the latest Databricks features, best practices, and industry trends, and share knowledge with the team.

Requirements:
1. Databricks Experience: 5+ years of experience working with Databricks, including platform architecture, data engineering, and data analytics.
2. Technical Skills: Proficiency in languages such as Python, Scala, or Java, and experience with Databricks APIs, Spark, and Delta Lake.
3. Data Engineering: Strong background in data engineering, including data warehousing, ETL, and data governance.
4. Leadership: Proven experience leading technical teams, mentoring junior engineers, and driving technical initiatives.
5. Communication: Excellent communication and interpersonal skills, with the ability to work effectively with clients and internal stakeholders.

Good to Have:
1. Certifications: Databricks Certified Professional or similar certifications.
2. Cloud Experience: Experience working with cloud platforms such as AWS, Azure, or GCP.
3. Machine Learning: Knowledge of machine learning concepts and experience with popular ML libraries.
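The performance-optimization responsibility above usually comes down to a few levers: caching hot datasets, partitioning on low-cardinality columns, and compacting small files. An illustrative sketch with invented paths and column names; OPTIMIZE and ZORDER assume a Databricks (or compatible Delta) runtime:

```python
# Illustrative Databricks performance-tuning moves: cache, repartition, compact.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.format("delta").load("/mnt/lake/bronze/transactions")  # placeholder path

# Cache a hot intermediate dataset that several downstream jobs reuse
df.cache()

# Re-partition on a low-cardinality column before writing, to limit small files
(df.repartition("event_date")
   .write.format("delta")
   .mode("overwrite")
   .partitionBy("event_date")
   .save("/mnt/lake/silver/transactions"))

# Compact small files and co-locate a frequently filtered column
spark.sql("OPTIMIZE delta.`/mnt/lake/silver/transactions` ZORDER BY (customer_id)")
```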
Posted 1 week ago
10.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Cloud Architect
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity
We’re looking for Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
Proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, and conducting workshops, as well as managing in-flight projects focused on cloud and big data. Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]. Understand current and future state enterprise architecture. Contribute to various technical streams during implementation of the project. Provide product- and design-level technical best practices. Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop and deliver technology solutions. Define and develop client-specific best practices around data management within a Hadoop or cloud environment. Recommend design alternatives for data ingestion, processing and provisioning layers. Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark. Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies.

Skills And Attributes For Success
Experience architecting highly scalable solutions on Azure, AWS and GCP. Strong understanding of and familiarity with the Azure/AWS/GCP/big data ecosystem components. Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms. Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming. Hands-on experience with major components like cloud ETL tools, Spark, and Databricks. Experience working with NoSQL in at least one of the data stores - HBase, Cassandra, MongoDB. Knowledge of Spark and Kafka integration with multiple Spark jobs to consume messages from multiple Kafka partitions. Solid understanding of ETL methodologies in a multi-tiered stack, integrating with big data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms. Good knowledge of Apache Kafka and Apache Flume. Experience in enterprise-grade solution implementations.
Experience in performance benchmarking enterprise applications. Experience in data security [on the move, at rest]. Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communicator (written and verbal, formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Strong verbal and written communication skills. Must be a team player and enjoy working in a cooperative and collaborative team environment. Adaptable to new technologies and standards. Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support. Responsible for the evaluation of technical risks and mapping out mitigation strategies. Working knowledge of at least one cloud platform: AWS, Azure or GCP. Excellent business communication, consulting, and quality process skills. Excellence in leading solution architecture, design, build and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domains. Minimum 7 years hands-on experience in one or more of the above areas. Minimum 10 years industry experience.

Ideally, you’ll also have
Strong project management skills. Client management skills. Solutioning skills.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
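The batch-mode ingestion called out in this listing can be sketched with Spark's JDBC reader (shown here in place of Hive/Pig/Sqoop, which the posting also names). Connection details, bounds, and table names are invented, and a JDBC driver JAR is assumed to be on the cluster:

```python
# Batch-mode ingestion sketch: parallel JDBC extract landed as Parquet.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("batch-ingest").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://source-db:5432/sales")  # placeholder
          .option("dbtable", "public.orders")
          .option("user", "etl_user")
          .option("password", "***")
          .option("numPartitions", 8)             # parallel extraction
          .option("partitionColumn", "order_id")  # numeric key, assumed
          .option("lowerBound", 1)
          .option("upperBound", 10_000_000)
          .load())

# "order_date" is an assumed column in the source table
orders.write.mode("append").partitionBy("order_date").parquet("/data/raw/orders")
```

The partitionColumn/bounds options split the extract into parallel reads, which is the usual lever for large batch pulls.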
Posted 1 week ago
10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Cloud Architect
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity
We’re looking for Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
Proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, and conducting workshops, as well as managing in-flight projects focused on cloud and big data. Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]. Understand current and future state enterprise architecture. Contribute to various technical streams during implementation of the project. Provide product- and design-level technical best practices. Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop and deliver technology solutions. Define and develop client-specific best practices around data management within a Hadoop or cloud environment. Recommend design alternatives for data ingestion, processing and provisioning layers. Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark. Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies.

Skills And Attributes For Success
Experience architecting highly scalable solutions on Azure, AWS and GCP. Strong understanding of and familiarity with the Azure/AWS/GCP/big data ecosystem components. Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms. Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming. Hands-on experience with major components like cloud ETL tools, Spark, and Databricks. Experience working with NoSQL in at least one of the data stores - HBase, Cassandra, MongoDB. Knowledge of Spark and Kafka integration with multiple Spark jobs to consume messages from multiple Kafka partitions. Solid understanding of ETL methodologies in a multi-tiered stack, integrating with big data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms. Good knowledge of Apache Kafka and Apache Flume. Experience in enterprise-grade solution implementations.
Experience in performance benchmarking enterprise applications. Experience in data security [on the move, at rest]. Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communicator (written and verbal, formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Strong verbal and written communication skills. Must be a team player and enjoy working in a cooperative and collaborative team environment. Adaptable to new technologies and standards. Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support. Responsible for the evaluation of technical risks and mapping out mitigation strategies. Working knowledge of at least one cloud platform: AWS, Azure or GCP. Excellent business communication, consulting, and quality process skills. Excellence in leading solution architecture, design, build and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domains. Minimum 7 years hands-on experience in one or more of the above areas. Minimum 10 years industry experience.

Ideally, you’ll also have
Strong project management skills. Client management skills. Solutioning skills.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
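Of the NoSQL stores this listing names (HBase, Cassandra, MongoDB), MongoDB gives the shortest illustration. A minimal sketch, assuming the pymongo package and invented host, database, and collection names:

```python
# Minimal MongoDB sketch: insert a document, then aggregate by event type.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder host
events = client["analytics"]["policy_events"]      # placeholder db/collection

events.insert_one({"policy_id": "P-1001", "type": "renewal", "premium": 1250.0})

pipeline = [
    {"$group": {"_id": "$type",
                "count": {"$sum": 1},
                "total_premium": {"$sum": "$premium"}}}
]
for row in events.aggregate(pipeline):
    print(row)
```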
Posted 1 week ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
Remote
Role: Senior Data Engineer with Databricks
Experience: 5+ Years
Job Type: Contract
Contract Duration: 6 Months
Budget: 1.0 lakh per month
Location: Remote

JOB DESCRIPTION:
We are looking for a dynamic and experienced Senior Data Engineer – Databricks to design, build, and optimize robust data pipelines using the Databricks Lakehouse platform. The ideal candidate should have strong hands-on skills in Apache Spark, PySpark, and cloud data services, and a good grasp of Python and Java. This role involves close collaboration with architects, analysts, and developers to deliver scalable and high-performing data solutions across AWS, Azure, and GCP.

ESSENTIAL JOB FUNCTIONS
1. Data Pipeline Development
• Build scalable and efficient ETL/ELT workflows using Databricks and Spark for both batch and streaming data.
• Leverage Delta Lake and Unity Catalog for structured data management and governance.
• Optimize Spark jobs by tuning configurations, caching, partitioning, and serialization techniques.
2. Cloud-Based Implementation
• Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery).
• Manage and optimize data storage, access control, and pipeline orchestration using native cloud tools.
• Use tools like Databricks Auto Loader and SQL Warehouses for efficient data ingestion and querying (an Auto Loader sketch follows this listing).
3. Programming & Automation
• Write clean, reusable, and production-grade code in Python and Java.
• Automate workflows using orchestration tools (e.g., Airflow, ADF, or Cloud Composer).
• Implement robust testing, logging, and monitoring mechanisms for data pipelines.
4. Collaboration & Support
• Collaborate with data analysts, data scientists, and business users to meet evolving data needs.
• Support production workflows, troubleshoot failures, and resolve performance bottlenecks.
• Document solutions, maintain version control, and follow Agile/Scrum processes.

Required Skills
Technical Skills:
• Databricks: Hands-on experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration.
• Spark: Expertise in Spark transformations, joins, window functions, and performance tuning.
• Programming: Strong in PySpark and Java, with experience in data validation and error handling.
• Cloud Services: Good understanding of AWS, Azure, or GCP data services and security models.
• DevOps/Tools: Familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools.
Experience:
• 5–8 years of data engineering or backend development experience.
• Minimum 1–2 years of hands-on work in Databricks with Spark.
• Exposure to large-scale data migration, processing, or analytics projects.
Certifications (nice to have): Databricks Certified Data Engineer Associate

Working Conditions
Hours of work - Full-time hours; flexibility for remote work while ensuring availability during US hours.
Overtime expectations - Overtime is generally not required as long as commitments are met.
Work environment - Primarily remote; occasional on-site work may be needed only during client visits.
Travel requirements - No travel required.
On-call responsibilities - On-call duties during deployment phases.
Special conditions or requirements - Not applicable.

Workplace Policies and Agreements
Confidentiality Agreement: Required to safeguard client-sensitive data.
Non-Compete Agreement: Must be signed to ensure proprietary model security.
Non-Disclosure Agreement: Must be signed to ensure client confidentiality and security.
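As referenced in the listing, here is a minimal Databricks Auto Loader sketch for incremental file ingestion. Paths are placeholders, and the cloudFiles source is only available on a Databricks runtime:

```python
# Minimal Databricks Auto Loader sketch: incremental JSON ingestion into Delta.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (spark.readStream
          .format("cloudFiles")                     # Auto Loader source
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/events")
          .load("/mnt/lake/landing/events/"))       # placeholder landing path

(stream.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/lake/_checkpoints/events")
       .trigger(availableNow=True)   # process the backlog, then stop
       .start("/mnt/lake/bronze/events"))
```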
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Data Engineering Specialist (11788)
Investec is a distinctive Specialist Bank serving clients principally in the UK and South Africa. Our culture gives us our edge: we work hard to find colleagues who'll think out of the ordinary and we put them in environments where they'll flourish. We combine a flat structure with a focus on internal mobility. If you can bring an entrepreneurial spirit and a desire to learn and collaborate to your work, this could be the boost your career deserves.

Team Description
The Offshore Data Engineering Lead will be responsible for overseeing the data and application development efforts which support our Microsoft Data Mesh Platform. Working as part of the Investec Central Data Team, the candidate will be responsible for leading development of solutions and applications that support our data domain teams in the creation of data products. This role involves driving technical initiatives, exploring new technologies, and enhancing engineering practices within the data teams, in line with the group engineering strategy. The Data Engineering Lead will be a key driver of Investec's move to Microsoft Fabric and other enabling data quality, data management, and data orchestration technologies.

Key Roles And Responsibilities
Lead the development and implementation of data and custom application solutions that support the creation of data products across various data domains. Design, build, and maintain data pipelines using Microsoft Azure Data Platform, Microsoft Fabric, and Databricks technologies. Ensure data quality, integrity, and security within the data mesh architecture. Share group engineering context with the CIO and engineers within the business unit continuously. Drive engineering efficiency and enable teams to deliver high-quality software quickly within the business unit. Cultivate a culture focused on security, risk management, and best practices in engineering. Actively engage with the data domain teams, business units, and the wider engineering community to promote knowledge sharing. Spearhead technical projects and innovation within the business unit's engineering teams and contribute to group engineering initiatives. Advance the technical skills of the engineering community and mentor engineers within the business unit. Enhance the stability, performance, and security of the business unit's systems.
Develop and promote exceptional engineering documentation and practices. Build a culture of development and mentorship within the central data team. Provide guidance on technology and engineering practices. Actively encourage creating Investec open-source software where appropriate within the business unit. Actively encourage team members within the business unit to speak at technical conferences based on the work being done.

Core Skills And Knowledge
Proven experience in data engineering, with a strong focus on Microsoft Data Platform technologies, including Azure Data Factory, Azure SQL Database, and Databricks. Proficiency in programming languages such as C# and/or Python, with experience in application development being a plus. Experience with CI/CD pipelines, Azure, and Azure DevOps. Strong experience and knowledge of PySpark and SQL, with the ability to create solutions using Microsoft Fabric. Ability to create solutions that query and work with web APIs. In-depth knowledge of Azure, containerisation, and Kubernetes. Strong understanding of data architecture concepts, particularly data mesh principles. Excellent problem-solving skills and the ability to work independently as a self-starter. Strong communication and collaboration skills, with the ability to work effectively in a remote team environment. Relevant degree in Computer Science, Data Engineering, or a related field is preferred.

As part of our collaborative & agile culture, our working week is 4 days in the office and one day remote. Investec offers a range of wellbeing benefits to make our people feel healthier, balanced and more fulfilled in their lives inside and outside of work. Embedded in our culture is a sense of belonging and inclusion. This creates an environment in which everyone is free to be themselves, which helps to drive innovation, creativity and ultimately business performance. At Investec we want everyone to find it easy to be themselves, and to feel they belong. It's a responsibility we all share and is integral to our purpose and values as an organisation. Research shows that some candidates can be reluctant to apply to a role unless they meet all the criteria. We pride ourselves on our entrepreneurial spirit here and welcome you to do the same – if the role excites you, please don't let our person specification hold you back. Get in touch!

Recite Me
We commit to ensuring that everyone is fairly assessed during our recruitment process. To assist candidates in completing their application form, Recite Me assistive technology is available on our Careers pages. This can be accessed by clicking on the ‘Accessibility Options' link at the top of the page. The Recite Me tool includes a screen reader, styling and customisation options, a series of reading aids, a translator and more. If you have any form of disability or neurodivergent need and require further assistance in completing your application, please contact the Careers team at CareersIGSI@investec.com who will be happy to assist.

Location: Parinee Crescenzo, 11th floor, G Block BKC, Bandra Kurla Complex, Bandra (E), Mumbai, Maharashtra 400051, India
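The "query and work with web APIs" skill above can be illustrated with a small extract-and-land script. The endpoint, response shape, and output file are invented for the example; assumes the requests and pandas packages (with pyarrow for the Parquet write):

```python
# Illustrative web-API extract: pull JSON from a REST endpoint and land it.
import requests
import pandas as pd

resp = requests.get("https://api.example.com/v1/rates", timeout=30)  # placeholder URL
resp.raise_for_status()

# "results" is an assumed key in the response payload
rates = pd.json_normalize(resp.json()["results"])  # flatten nested JSON
rates.to_parquet("rates_snapshot.parquet", index=False)
print(rates.head())
```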
Posted 1 week ago
8.0 - 13.0 years
18 - 25 Lacs
Chennai, Bengaluru, Mumbai (All Areas)
Hybrid
For this Azure Data Engineer role, we are looking for candidates who possess expertise in the following:
- Databricks
- Data Factory
- SQL
- PySpark/Spark

Roles and Responsibilities
As part of our dynamic team, you will be responsible for:
- Designing, implementing, and maintaining data pipelines (see the sketch below)
- Collaborating with cross-functional teams to understand data requirements
- Optimizing and troubleshooting data processes
- Leveraging Azure data services to build scalable solutions
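For illustration, here is a minimal sketch of the kind of Databricks pipeline such a role involves: ingest raw files, clean them with PySpark, and publish a Delta table. The storage path, table, and column names are invented for the example, not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical pipeline: ingest raw sales CSVs, clean them, publish a Delta table.
spark = SparkSession.builder.appName("sales-ingest").getOrCreate()

raw = (spark.read.option("header", True)
       .csv("abfss://raw@account.dfs.core.windows.net/sales/"))  # illustrative ADLS path

clean = (raw.dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
         .filter(F.col("amount").isNotNull()))

# Write as a Delta table, partitioned by date for downstream query performance.
(clean.write.format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("analytics.sales_orders"))
```

Partitioning by date, as here, is a common choice when most downstream queries filter on a date range.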
Posted 1 week ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Technical Skills
- 8+ years of hands-on experience in SQL development, query optimization, and performance tuning
- Expertise in ETL tools (SSIS, Azure ADF, Databricks, Snowflake, or similar) and relational databases (SQL Server, PostgreSQL, MySQL, Oracle)
- Strong understanding of data warehousing concepts, data modeling, indexing strategies, and query execution plans
- Proficiency in writing efficient stored procedures, views, triggers, and functions for large datasets
- Experience working with structured and semi-structured data (CSV, JSON, XML, Parquet)
- Hands-on experience in data validation, cleansing, and reconciliation to maintain high data quality (see the sketch below)
- Exposure to real-time and batch data processing techniques
- Nice-to-have: experience with Azure or other data engineering stacks (ADF, Azure SQL, Synapse, Databricks, Snowflake), Python, Spark, NoSQL databases, and reporting tools like Power BI or Tableau
- Strong problem-solving skills and the ability to troubleshoot ETL failures and performance issues
- Ability to collaborate with business and analytics teams to understand and implement data requirements
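Since the posting emphasizes data validation and reconciliation, here is a minimal Python sketch of a reconciliation check between a source extract and a warehouse load; the file names, columns, and tolerances are hypothetical.

```python
import pandas as pd

# Hypothetical reconciliation: compare row counts, summed amounts, and key
# completeness between a source extract and the loaded warehouse data.
# (pd.read_parquet requires pyarrow or fastparquet to be installed.)
source = pd.read_csv("source_extract.csv")
loaded = pd.read_parquet("warehouse_load.parquet")

checks = {
    "row_count_match": len(source) == len(loaded),
    "amount_total_match": abs(source["amount"].sum() - loaded["amount"].sum()) < 0.01,
    "no_null_keys": loaded["order_id"].notna().all(),
}

failures = [name for name, passed in checks.items() if not passed]
if failures:
    raise ValueError(f"Reconciliation failed: {failures}")
print("All reconciliation checks passed.")
```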
Posted 1 week ago
7.0 - 11.0 years
15 - 25 Lacs
Mumbai, Mumbai (All Areas)
Work from Office
Key Responsibilities
- Design, develop, and implement a data lake house architecture on AWS, ensuring scalability, flexibility, and performance
- Build ETL/ELT pipelines for ingesting, transforming, and processing structured and unstructured data (see the sketch below)
- Collaborate with cross-functional teams to gather data requirements and deliver data solutions aligned with business needs
- Develop and manage data models, schemas, and data lakes for analytics, reporting, and BI purposes
- Implement data governance practices, ensuring data quality, security, and compliance
- Perform data integration between on-premise and cloud systems using AWS services
- Monitor and troubleshoot data pipelines and infrastructure for reliability and scalability

Skills and Qualifications
- 7+ years of experience in data engineering, with a focus on cloud data platforms
- Strong experience with AWS services: S3, Glue, Redshift, Athena, Lambda, IAM, RDS, and EC2
- Hands-on experience in building data lakes, data warehouses, and lake house architectures
- Experience building ETL/ELT pipelines using tools like AWS Glue, Apache Spark, or similar
- Expertise in SQL and Python or Java for data processing and transformations
- Familiarity with data modeling and schema design in cloud environments
- Understanding of data security and governance practices, including IAM policies and data encryption
- Experience with big data technologies (e.g., Hadoop, Spark) and data streaming services (e.g., Kinesis, Kafka)
- Lending domain knowledge will be an added advantage

Preferred Skills
- Experience with Databricks or similar platforms for data engineering
- Familiarity with DevOps practices for deploying data solutions on AWS (CI/CD pipelines)
- Knowledge of API integration and cloud data migration strategies
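As a rough illustration of the ELT work described above, here is a minimal PySpark step that lands raw JSON events from S3 and writes a partitioned Parquet table that Athena could query. The bucket names and event schema are assumptions for the sketch.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical ELT step for a lake house: flatten raw JSON events from S3 and
# publish a partitioned Parquet dataset for Athena.
spark = SparkSession.builder.appName("events-elt").getOrCreate()

# "s3://" paths work on EMR/Glue; vanilla Spark deployments typically use "s3a://".
events = spark.read.json("s3://example-raw-bucket/events/")

curated = (events
           .withColumn("event_date", F.to_date(F.col("event_ts")))
           .select("event_id", "user_id", "event_type", "event_date"))

(curated.write.mode("append")
 .partitionBy("event_date")
 .parquet("s3://example-curated-bucket/events/"))
```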
Posted 1 week ago
130.0 years
6 - 9 Lacs
Hyderābād
On-site
Job Description: Senior Manager, Data Engineer

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Overview and Responsibilities
- Designs, builds, and maintains data pipeline architecture: ingests, processes, and publishes data for consumption
- Batch-processes collected data and formats it in an optimized way to make it analysis-ready
- Ensures best-practice sharing across the organization
- Enables delivery of data-analytics projects
- Develops deep knowledge of the company's supported technology; understands the complexity of dependencies between multiple teams and platforms (people, technologies)
- Communicates intensively with other platforms/competencies to comprehend new trends and methodologies being implemented or considered within the company ecosystem
- Understands the customer's and stakeholders' business needs and priorities and helps build solutions that support our business goals
- Establishes and manages close relationships with customers/stakeholders
- Maintains an overview of data engineering market developments in order to explore new ways of delivering pipelines that increase their value and contribution
- Builds a "community of practice", leveraging experience from delivering complex analytics projects
- Is accountable for ensuring that the team delivers solutions with high quality standards, timeliness, compliance, and excellent user experience
- Contributes to innovative experiments, specifically to idea generation, idea incubation, and/or experimentation, identifying tangible and measurable criteria

Qualifications:
- Bachelor's degree in Computer Science, Data Science, Information Technology, Engineering, or a related field
- 3+ years of experience as a Data Engineer or in a similar role, with a strong portfolio of data projects
- 3+ years of SQL experience, with the ability to write and optimize queries for large datasets
- 1+ years of experience and proficiency in Python for data manipulation, automation, and pipeline development
- Experience with Databricks, including creating notebooks and utilizing Spark for big data processing
- Strong experience with a data warehousing solution (such as Snowflake), including schema design and performance optimization
- Experience with data governance and quality management tools, particularly Collibra DQ
- Strong analytical and problem-solving skills, with attention to detail
- SAP Basis experience working on SAP S/4HANA deployments on cloud platforms (for example: AWS, GCP, or Azure)

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who we are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What we look for: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today.

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Flexible Work Arrangements: Hybrid
Required Skills: Business, Business Intelligence (BI), Business Management, Contractor Management, Cost Reduction, Database Administration, Database Optimization, Data Engineering, Data Flows, Data Infrastructure, Data Management, Data Modeling, Data Optimization, Data Quality, Data Visualization, Design Applications, ETL Tools, Information Management, Management Process, Operating Cost Reduction, Senior Program Management, Social Collaboration, Software Development, Software Development Life Cycle (SDLC) {+ 1 more}
Job Posting End Date: 08/13/2025. A job posting is effective until 11:59:59 PM on the day before the listed end date, so please apply no later than the day before the job posting end date.
Requisition ID: R350686
Posted 1 week ago
5.0 years
5 - 8 Lacs
Hyderābād
On-site
Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Your future duties and responsibilities
Position: Senior Software Engineer
Experience: 5-10 years
Category: Software Development/Engineering
Shift Timings: 1:00 pm to 10:00 pm
Main location: Hyderabad
Work Type: Work from office
Skills: Spark (PySpark), Python and SQL
Employment Type: Full Time
Position ID: J0625-0219

Required qualifications to be successful in this role
Must-have skills:
- 5+ years of development experience with Spark (PySpark), Python, and SQL
- Extensive knowledge of building data pipelines
- Hands-on experience with Databricks development
- Strong experience developing on Linux OS
- Experience with scheduling and orchestration (e.g., Databricks Workflows, Airflow, Prefect, Control-M); see the sketch below

Good-to-have skills:
- Solid understanding of distributed systems, data structures, and design principles
- Agile development methodologies (e.g., SAFe, Kanban, Scrum)
- Comfortable communicating with teams via showcases/demos
- Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project
- Actively migrate use cases from our on-premises Data Lake to Databricks on GCP
- Collaborate with Product Management and business partners to understand use case requirements and reporting
- Adhere to internal development best practices/lifecycle (e.g., testing, code reviews, CI/CD, documentation)
- Document and showcase feature designs/workflows
- Participate in team meetings and discussions around product development
- Stay up to date on the latest industry trends and design patterns
- 3+ years of experience with Git
- 3+ years of experience with CI/CD (e.g., Azure Pipelines)
- Experience with streaming technologies, such as Kafka and Spark
- Experience building applications on Docker and Kubernetes
- Cloud experience (e.g., Azure, Google)

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
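To make the scheduling and orchestration requirement concrete, here is a minimal Airflow DAG sketch; the DAG id and the task body are placeholders for a real Spark submission (for example, a Databricks Jobs API call or spark-submit), not CGI's actual setup.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task body; in practice this would trigger a Spark job.
def run_ingestion():
    print("Submitting the ingestion job...")

with DAG(
    dag_id="daily_lake_ingestion",     # illustrative name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=run_ingestion)
```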
Posted 1 week ago
40.0 years
4 - 8 Lacs
Hyderābād
On-site
India - Hyderabad
JOB ID: R-216713
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 12, 2025
CATEGORY: Information Systems

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human data to push beyond what's known today.

ABOUT THE ROLE
Let's do this. Let's change the world. At Amgen, we believe that innovation can and should be happening across the entire company. Part of the Artificial Intelligence & Data function of the Amgen Technology and Medical Organizations (ATMOS), the AI & Data Innovation Lab (the Lab) is a center for exploration and innovation, focused on integrating and accelerating new technologies and methods that deliver measurable value and competitive advantage. We've built algorithms that predict bone fractures in patients who haven't even been diagnosed with osteoporosis yet. We've built software to help us select clinical trial sites so we can get medicines to patients faster. We've built AI capabilities to standardize and accelerate the authoring of regulatory documents so we can shorten the drug approval cycle. And that's just the beginning. Join us!

We are seeking a Senior DevOps Software Engineer to join the Lab's software engineering practice. This role is integral to developing top-tier talent, setting engineering best practices, and evangelizing full-stack development capabilities across the organization. The Senior DevOps Software Engineer will design and implement deployment strategies for AI systems using the AWS stack, ensuring high availability, performance, and scalability of applications.

Roles & Responsibilities:
- Design and implement deployment strategies using the AWS stack, including EKS, ECS, Lambda, SageMaker, and DynamoDB
- Configure and manage CI/CD pipelines in GitLab to streamline the deployment process
- Develop, deploy, and manage scalable applications on AWS, ensuring they meet high standards for availability and performance
- Implement infrastructure-as-code (IaC) to provision and manage cloud resources consistently and reproducibly
- Collaborate with AI product design and development teams to ensure seamless integration of AI models into the infrastructure
- Monitor and optimize the performance of deployed AI systems, addressing any issues related to scaling, availability, and performance
- Lead and develop standards, processes, and best practices for the team across the AI system deployment lifecycle
- Stay updated on emerging technologies and best practices in AI infrastructure and AWS services to continuously improve deployment strategies
- Familiarity with AI concepts such as traditional AI, generative AI, and agentic AI, with the ability to learn and adopt new skills quickly

Functional Skills:
- Deep expertise in designing and maintaining CI/CD pipelines and enabling software engineering best practices across the overall software product development lifecycle
- Ability to implement automated testing, build, deployment, and rollback strategies
- Advanced proficiency managing and deploying infrastructure with the AWS cloud platform, including cost planning, tracking, and optimization
- Proficiency with backend languages and frameworks (Python, FastAPI, Flask preferred); see the service sketch at the end of this posting
- Experience with databases (Postgres/DynamoDB)
- Experience with microservices architecture and containerization (Docker, Kubernetes)

Good-to-Have Skills:
- Familiarity with enterprise software systems in life sciences or healthcare domains
- Familiarity with big data platforms and experience in data pipeline development (Databricks, Spark)
- Knowledge of data security, privacy regulations, and scalable software solutions

Soft Skills:
- Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders
- Ability to foster a collaborative and innovative work environment
- Strong problem-solving abilities and attention to detail
- High degree of initiative and self-motivation

Basic Qualifications:
- Bachelor's degree in Computer Science, AI, Software Engineering, or a related field
- 5+ years of experience in full-stack software engineering

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
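As a small illustration of the Python/FastAPI backend work the posting mentions, here is a minimal service sketch; the endpoint, request fields, and scoring logic are invented placeholders rather than Amgen's actual stack.

```python
from fastapi import FastAPI
from pydantic import BaseModel

# Minimal hypothetical microservice; names are illustrative.
app = FastAPI(title="prediction-service")

class PredictionRequest(BaseModel):
    feature_a: float
    feature_b: float

@app.post("/predict")
def predict(req: PredictionRequest) -> dict:
    # Stand-in for a real model call (e.g., a SageMaker endpoint invocation).
    score = 0.5 * req.feature_a + 0.5 * req.feature_b
    return {"score": score}
```

Run locally with, for example, `uvicorn main:app --reload`; in the deployment setting the posting describes, a service like this would typically be containerized and deployed to EKS/ECS behind a CI/CD pipeline.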
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Automation Tester (Selenium, Python, Databricks)
Candidate Specification: 7+ years, immediate to 30 days.

Job Description
- Experience with automated testing
- Ability to code and read a programming language (Python)
- Experience in pytest and Selenium (Python); see the sketch below
- Experience working with large datasets and complex data environments
- Experience with Airflow, Databricks, Data Lake, PySpark
- Knowledge and working experience in Agile methodologies
- Experience in CI/CD/CT methodology
- Experience in test methodologies

Skills Required
Role: Automation Tester
Industry Type: IT/Computers - Software
Required Education: B.Tech
Employment Type: Full Time, Permanent
Key Skills: Selenium, Python, Databricks

Other Information
Job Code: GO/JC/100/2025
Recruiter Name: Sheena Rakesh
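Here is a minimal pytest + Selenium sketch of the kind of automated UI check this role involves; the URL and element IDs are hypothetical.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

# Selenium 4.6+ manages the browser driver automatically via Selenium Manager.
@pytest.fixture
def driver():
    drv = webdriver.Chrome()
    yield drv
    drv.quit()

def test_login_page_title(driver):
    driver.get("https://example.com/login")  # illustrative URL
    assert "Login" in driver.title

def test_login_form_present(driver):
    driver.get("https://example.com/login")
    assert driver.find_element(By.ID, "username").is_displayed()
```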
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs. The Senior Data Engineer will:
- Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
- Support development, testing, and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools.
- Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes.

Your Key Responsibilities
Data Engineering
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, while validating the efficiency of data tools in the Big Data Environment.
- Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Mentor and coach staff data engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions.
- Support standardization, customization and ad hoc data analysis, and develop the mechanisms to ingest, analyze, validate, normalize, and clean data.
- Write unit/integration/performance test scripts and perform data analysis required to troubleshoot data-related issues and assist in the resolution of data issues.
- Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes.
- Lead the evaluation, implementation and deployment of emerging tools and processes for analytic data engineering to improve productivity.
- Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes.
- Learn about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics.
- Solve complex data problems to deliver insights that help achieve business objectives.
- Implement statistical data quality procedures on new data sources by applying rigorous iterative data analytics.

Relationship Building and Collaboration
- Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value.
- Collaborate with AI/ML engineers to create data products for analytics and data scientist team members to improve productivity.
- Advise, consult, mentor and coach other data and analytic professionals on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions.

Skills and Attributes for Success
Technical/Functional Expertise
- Advanced experience and understanding of data/Big Data, data integration, data modelling, AWS, and cloud technologies.
- Strong business acumen; knowledge of the Pharmaceutical, Healthcare, or Life Sciences sector is preferred, but not required.
- Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata.
- Ability to build and optimize queries (SQL), data sets, 'Big Data' pipelines, and architectures for structured and unstructured data.
- Experience with or knowledge of Agile Software Development methodologies.

Leadership
- Strategic mindset, thinking above the minor, tactical details and focusing on the long-term, strategic goals of the organization.
- Advocate of a culture of collaboration and psychological safety.

Decision-making and Autonomy
- Shift from manual decision-making to data-driven, strategic decision-making.
- Proven track record of applying critical thinking to resolve issues and overcome obstacles.

Interaction
- Proven track record of collaboration and developing strong working relationships with key stakeholders by building trust and being a true business partner.
- Demonstrated success in collaborating with different IT functions, contractors, and constituents to deliver data solutions that meet standards and security measures.

Innovation
- Passion for re-imagining new solutions, processes, and end-user experience by leveraging digital and disruptive technologies and developing advanced data and analytics solutions.
- Advocate of a culture of growth mindset, agility, and continuous improvement.

Complexity
- Demonstrates high multicultural sensitivity to lead teams effectively.
- Ability to coordinate and problem-solve amongst larger teams.
To qualify for the role, you must have the following:

Essential Skillsets
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
- 5+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
- Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and development and optimization of ETL pipelines
- Proven track record of designing and implementing complex data solutions
- Demonstrated understanding and experience using:
  - Data engineering programming languages (e.g., Python)
  - Distributed data technologies (e.g., PySpark)
  - Cloud platform deployment and tools (e.g., Kubernetes)
  - Relational SQL databases
  - DevOps and continuous integration
  - AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS)
  - Databricks/ETL
  - IICS/DMS
  - GitHub
  - EventBridge, Tidal
- Understanding of database architecture and administration
- Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
- Possesses high proficiency in programming languages (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architecture/pipelines that fit business goals
- Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
- Strong problem-solving and troubleshooting skills
- Ability to work in a fast-paced environment and adapt to changing business priorities

Desired Skillsets
- Master's degree in Engineering, Computer Science, Data Science, or a related field
- Experience in a global working environment

Travel Requirements
- Access to transportation to attend meetings
- Ability to fly to meetings regionally and globally

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
2.0 years
0 Lacs
Hyderābād
On-site
Overview:
The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners and final business users. This will provide you the correct visibility and understanding of the criticality of your developments.

Responsibilities:
- Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope
- Active contributor to code and development in projects and services
- Partner with data engineers to ensure data access for discovery and proper data is prepared for model consumption
- Partner with ML engineers working on industrialization
- Communicate with business stakeholders in the process of service design, training, and knowledge transfer
- Support large-scale experimentation and build data-driven models
- Refine requirements into modelling problems
- Influence product teams through data-based recommendations
- Research state-of-the-art methodologies
- Create documentation for learnings and knowledge transfer
- Create reusable packages or libraries
- Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards
- Leverage big data technologies to help process data and build scaled data pipelines (batch to real time)
- Implement the end-to-end ML lifecycle with Azure Databricks and Azure Pipelines
- Automate ML model deployments

Qualifications:
- BE/B.Tech in Computer Science, Maths, or related technical fields
- Overall 2-4 years of experience working as a Data Scientist
- 2+ years' experience building solutions in the commercial or supply chain space
- 2+ years working in a team to deliver production-level analytic solutions
- Fluent in Git (version control); understanding of Jenkins and Docker is a plus
- Fluent in SQL syntax
- 2+ years' experience in statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems
- 2+ years' experience in developing business-problem-related statistical/ML modeling with industry tools, with a primary focus on Python or PySpark development
- Data Science: hands-on experience and strong knowledge of building supervised and unsupervised machine learning models; knowledge of time series/demand forecast models is a plus
- Programming skills: hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL
- Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators
- Cloud (Azure): experience in Databricks and ADF is desirable; familiarity with Spark, Hive, and Pig is an added advantage
- Business storytelling and communicating data insights in a business-consumable format; fluent in one visualization tool
- Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities
- Experience with Agile methodology for teamwork and analytics 'product' creation
- Experience in Reinforcement Learning is a plus
- Experience in Simulation and Optimization problems in any space is a plus
- Experience with Bayesian methods is a plus
- Experience with Causal inference is a plus
- Experience with NLP is a plus
- Experience with Responsible AI is a plus
- Experience with distributed machine learning is a plus
- Experience in DevOps; hands-on experience with one or more cloud service providers: AWS, GCP, Azure (preferred)
- Model deployment experience is a plus
- Experience with version control systems like GitHub and CI/CD tools
- Experience in exploratory data analysis
- Knowledge of MLOps/DevOps and deploying ML models is preferred
- Experience using MLflow, Kubeflow, etc. will be preferred (see the tracking sketch below)
- Experience executing and contributing to MLOps automation infrastructure is good to have
- Exceptional analytical and problem-solving skills
- Stakeholder engagement: BU, vendors
- Experience building statistical models in the Retail or Supply chain space is a plus
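To ground the MLflow item above, here is a minimal experiment-tracking sketch for a supervised model run; the dataset is synthetic and the run and parameter names are illustrative.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real commercial/supply-chain dataset.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    # Log the hyperparameter and metric so runs can be compared in the MLflow UI.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
```

On Databricks, runs logged this way appear in the workspace's experiment tracking; locally, `mlflow ui` serves the same comparison view.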
Posted 1 week ago
1.0 years
12 Lacs
Hyderābād
On-site
Job Title: Scrum Master | Location: Hyderabad / Chennai
Start Date: Immediate
Duration: 6 Months to 1 Year (extendable)

Key Responsibilities:
- Lead agile teams focused on data ingestion, transformation, integration, and reporting
- Facilitate all core agile ceremonies: Sprint Planning, Standups, Reviews, Retrospectives
- Partner with Product Owners to maintain a clear, actionable backlog aligned with business priorities
- Support platforms including SAP ECC, IBP, HANA, BOBJ, Databricks, and Tableau
- Track team velocity, remove blockers, and foster continuous improvement
- Drive clarity, focus, and accountability across team members and deliverables
- Encourage a mindset of building value over completing tasks
- Guide teams in breaking down work into smaller, testable, and achievable components
- Monitor Agile metrics (velocity, burndown) and communicate progress to stakeholders
- Promote a culture of feedback, collaboration, and agility across global teams

Must-Have Qualifications:
- 3–5 years of experience as a Scrum Master with strong exposure to SAP, HANA, and data analytics ecosystems
- Deep understanding of Agile values, not just frameworks
- Strong interpersonal, time management, and organizational skills
- Ability to guide team dynamics, inspire accountability, and foster better collaboration
- Excellent communication skills with a knack for tailoring messages for technical and business audiences
- Experience working with global, cross-functional teams
- Flexible to work occasional hours outside standard business schedules

Job Type: Contractual / Temporary
Contract length: 6 months
Pay: Up to ₹1,200,000.00 per year
Work Location: In person
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location(s): Tower-11, (IT/ITES) SEZ of M/s Gurugram Infospace Ltd, Vill. Dundahera, Sector-21, Gurugram, Haryana, 122016, IN
Line of Business: Data Estate (DE)
Job Category: Engineering & Technology
Experience Level: Experienced Hire

At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Job Summary:
The Data Specialist will explore and transform an existing data remediation environment to ensure the smooth execution and automation of data validation, reporting, and analysis tasks. The ideal candidate will have strong technical skills in Excel, SQL, and Python, proficiency in using Microsoft Office tools for reporting, and familiarity with data visualization tools like Power BI or Tableau. Excellent communication and leadership skills are essential to foster a collaborative and productive team environment. Responsibilities may include leading small or large teams of full-time employees or contractors focused on remediating data at scale.

Key Responsibilities:
Team Management:
- Work with strategic teams of 5-10 or more data analysts and specialists, as needed for specific initiatives
- Provide guidance, mentorship, and support to team members to achieve individual and team goals

Data Validation and Analysis:
- Oversee data validation processes to ensure accuracy and completeness of data
- Utilize Excel, SQL, and Python for data manipulation, analysis, and validation tasks
- Implement best practices for data quality and integrity

Quality Assurance (QA):
- Establish and maintain QA processes to ensure the accuracy and reliability of data outputs
- Conduct regular audits and reviews of data processes to identify and rectify errors
- Develop and enforce data governance policies and procedures

Reporting and Presentation:
- Create and maintain comprehensive reports using Microsoft PowerPoint, Word, and other tools
- Develop insightful data visualizations and dashboards using Power BI and Tableau
- Present data findings and insights to stakeholders in a clear and concise manner

Collaboration and Communication:
- Collaborate with cross-functional teams to understand data needs and deliver solutions
- Communicate effectively with team members, stakeholders, and clients
- Facilitate team meetings and discussions to ensure alignment and progress on projects

Continuous Improvement:
- Identify opportunities for process improvements and implement changes to enhance efficiency
- Stay updated with industry trends and advancements in data management and reporting tools
- Foster a culture of continuous learning and development within the team

Qualifications:
- Bachelor's degree in Economics, Statistics, Computer Science, Information Technology, or other related fields
- 3+ years of relevant experience in a similar field
- Strong proficiency in Excel, SQL, and Python for data analysis and validation
- Advanced skills in Microsoft PowerPoint, Word, and other reporting tools
- Familiarity with Power BI and Tableau for data visualization
- Experience with Databricks
- Excellent communication, leadership, and interpersonal skills
- Strong problem-solving abilities and attention to detail
- Ability to work independently and manage multiple priorities in a fast-paced environment

Moody's is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody's Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary. For more information on the Securities Trading Program, please refer to the STP Quick Reference guide on ComplianceNet.

Please note: STP categories are assigned by the hiring teams and are subject to change over the course of an employee's tenure with Moody's.
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
- Assemble large, complex sets of data that meet non-functional and functional business requirements
- Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using Azure, Databricks, and SQL technologies
- Transform conceptual algorithms from R&D into efficient, production-ready code; the data developer must have a strong mathematical background in order to document and maintain the code
- Integrate finished models into larger data processes using UNIX scripting languages such as ksh and tools such as Python, Spark, and Scala
- Produce and maintain documentation for released data sets, new programs, shared utilities, or static data, within department standards
- Ensure quality deliverables to clients by following existing quality processes, manually calculating comparison data, developing statistical pass/fail testing, and visually inspecting data for reasonableness: the requirement is on-time with zero defects

Qualifications
Education/Training:
- B.E./B.Tech. with a major in Computer Science, BIS, CIS, Electrical Engineering, Operations Research, or another technical field. Coursework or experience in Numerical Analysis, Mathematics, or Statistics is a plus

Hard Skills:
- Proven experience working as a data engineer
- Highly proficient in using the Spark framework (Python and/or Scala)
- Extensive knowledge of data warehousing concepts, strategies, and methodologies
- Programming experience in Python, SQL, Scala
- Direct experience building data pipelines using Apache Spark (preferably in Databricks) and Airflow
- Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, and Azure Data Lake
- Experience with big data technologies (Hadoop)
- Databricks & Azure Big Data Architecture Certification would be a plus
- Must be team oriented with strong collaboration, prioritization, and adaptability skills
- Ability to write highly efficient code in terms of performance/memory utilization
- Basic knowledge of SQL; capable of handling common functions

Experience:
- Minimum 5-8 years of experience as a data engineer
- Experience modeling or manipulating large amounts of data is a plus
- Experience with demographic or retail business data is a plus

Additional Information
Our Benefits:
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates?
Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 1 week ago
8.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Role: Senior Data Modeler with 8-10 years' experience. Will be designing a new database on AWS for a legacy-to-cloud migration.

Responsibilities
- Analyze and translate business needs into data models
- Develop conceptual, logical, and physical data models
- Create and enforce database development standards
- Validate and reconcile data models to ensure accuracy
- Maintain and update existing data models
- Collaborate with data architects, data analysts, and business users
- Support the implementation of data management strategies
- Create data definitions and metadata
- Identify opportunities to optimize data usage

Qualifications
- Proven work experience as a Data Modeler or Data Architect for cloud environments
- Hands-on experience migrating legacy databases to cloud databases
- Experience in data modeling principles/methods, including conceptual, logical, and physical data models
- Knowledge of database structure systems and data mining
- Excellent analytical and problem-solving skills
- Familiarity with data visualization tools
- Experience with SQL and data modeling tools
- Understanding of agile methodologies
- Preferred: experience with AWS and Databricks
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description
JMAN Group is a growing technology-enabled management consultancy that empowers organizations to create value through data. Founded in 2010, we are a team of 450+ consultants based in London, UK, and a team of 300+ engineers in Chennai, India. Having delivered multiple projects in the US, we are now opening a new office in New York to help us support and grow our US client base. We approach business problems with the mindset of a management consultancy and the capabilities of a tech company. We work across all sectors, and have in-depth experience in private equity, pharmaceuticals, government departments and high-street chains. Our team is as cutting edge as our work. We take pride in being great to work with – no jargon or corporate-speak, flexible to change and receptive of feedback. We have a huge focus on investing in the training and professional development of our team, to ensure they can deliver high quality work and shape our journey to becoming a globally recognised brand. The business has grown quickly in the last 3 years with no signs of slowing down.

Technical Specifications
- 5+ years of experience in data platform builds
- Familiarity with multi-cloud data warehousing solutions (Snowflake, Redshift, Databricks, Fabric, AWS Glue, Azure Data Factory, Synapse, Matillion, dbt)
- Proficient in SQL and the Apache Spark/Python programming languages
- Good to have: data visualization using Power BI, Tableau, or Looker, and familiarity with full-stack technologies
- Experience with containerization technologies (e.g., Docker, Kubernetes)
- Experience with CI/CD pipelines and DevOps methodologies
- Ability to work independently, adapt to changing priorities, and learn new technologies quickly
- Experience in implementing or working with data governance frameworks and practices to ensure data integrity and regulatory compliance
- Knowledge of data quality tools and practices

Responsibilities
- Design and implement data pipelines using ETL/ELT tools and techniques
- Configure and manage data storage solutions, including relational databases, data warehouses, and data lakes
- Develop and implement data quality checks and monitoring processes
- Automate data platform deployments and operations using scripting and DevOps tools (e.g., Git, CI/CD pipelines)
- Ensure compliance with data governance and security standards throughout the data platform development process
- Troubleshoot and resolve data platform issues promptly and effectively
- Collaborate with the Data Architect to understand data platform requirements and design specifications
- Assist with data modelling and optimization tasks
- Work with business stakeholders to translate their needs into technical solutions
- Document the data platform architecture, processes, and best practices
- Stay up to date with the latest trends and technologies in full-stack development, data engineering, and DevOps
- Proactively suggest improvements and innovations for the data platform

Required Skillset
- ETL or ELT: AWS Glue / Azure Data Factory / Synapse / Matillion / dbt
- Data Warehousing: Azure SQL Server / Redshift / BigQuery / Databricks / Snowflake / Fabric (any one is mandatory)
- Data Visualization: Looker, Power BI, Tableau
- SQL and the Apache Spark / Python programming languages
- Containerization technologies (e.g., Docker, Kubernetes)
- Cloud experience: AWS / Azure / GCP
- Scripting and DevOps tools (e.g., Git, CI/CD pipelines)
Posted 1 week ago
Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.
The average salary range for Databricks professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
In the field of Databricks, a typical career path may include:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
In addition to Databricks expertise, other skills that are often expected or helpful alongside Databricks include:
- Apache Spark
- Python/Scala programming
- Data modeling
- SQL
- Data visualization tools
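To make the Spark-plus-Python pairing concrete, here is a small, self-contained PySpark aggregation of the sort these roles (and their interviews) commonly exercise; the data and column names are invented for practice.

```python
from pyspark.sql import SparkSession, functions as F

# Small in-memory dataset standing in for a real sales table.
spark = SparkSession.builder.appName("interview-practice").getOrCreate()

df = spark.createDataFrame(
    [("north", 120.0), ("south", 80.0), ("north", 45.5)],
    ["region", "amount"],
)

# Group-and-aggregate: total sales and order count per region.
df.groupBy("region").agg(
    F.sum("amount").alias("total_amount"),
    F.count("*").alias("orders"),
).show()
```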
As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!