6.0 - 8.0 years
1 - 6 Lacs
Noida
Work from Office
Urgent Hiring: Microsoft Fabric Cloud Architect | 6-8 yrs | Noida | Immediate to 30 days. Skills: Azure Cloud, Microsoft Fabric, PySpark, DAX, Python, Azure Synapse, ADF, Databricks, ETL pipelines.
Posted 2 weeks ago
6.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hi all, this is an exciting career opportunity for an MSBI Developer position. Please find the JD below:
- Experience with MSBI, SSIS, SSRS, SQL Server
- Experience with ADF or Azure
Location: Pune, Bangalore, Hyderabad, Chennai
Experience: 6-9 years
Notice period: Immediate or max 10 days
If you are interested, please share your profile with jeyaramya.rajendran@zensar.com
Posted 2 weeks ago
3.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Hybrid
About Quantzig: Quantzig is a global analytics and advisory firm with offices in the US, UK, Canada, China, and India. We have assisted clients across the globe with end-to-end advanced analytics, visual storyboarding, machine learning, and data engineering solutions for prudent decision making. We are a rapidly growing organization built and operated by high-performance champions. If you have the business and functional skills to take ownership of an entire project end to end, and can help build a team with a great work ethic and a drive to learn, you are the one we're looking for. Our clients love us for our solutioning capability and our enthusiasm, and we expect you to be a part of our growth story.

Company Website: https://www.quantzig.com/
Looking for immediate joiners.

Job Title: Hybrid BI + SAP

Purpose of the Role
We are looking for a technically strong and business-savvy Data Engineering & Analytics Specialist to support critical analytics workflows built on Azure and Databricks. The ideal candidate will bring hands-on experience in SQL, SAP data extraction, PySpark scripting, and pipeline orchestration via ADF, while also collaborating with business stakeholders to drive value from data. This is a fast-paced role that demands ownership, agility, and an end-to-end delivery mindset.

Key Responsibilities
- Design, develop, and maintain robust data pipelines using Azure Data Factory and Databricks (PySpark); see the sketch after this posting.
- Extract, transform, and load data from SAP systems and other enterprise sources, ensuring reliability and performance.
- Write complex SQL queries and optimize data processing workflows for analytical consumption.
- Work with business stakeholders to translate functional requirements into technical implementations.
- Support ad-hoc analyses and dashboard development in Excel and Power BI, enabling self-service and visualization.
- Ensure data governance, validation, and documentation across workflows.
- Collaborate with cross-functional teams across Data Engineering, BI, and Analytics.
- Contribute to process automation, pipeline scalability, and performance improvements.
- Own project delivery timelines and drive issue resolution with minimal supervision.

Required Qualifications
- 3-5 years of relevant experience in data engineering or advanced analytics roles.
- Strong expertise in SQL for data manipulation and business rule logic.
- Hands-on experience working with SAP data structures, extractions, and integration.
- Strong command of Databricks (PySpark) and ADF for data orchestration and transformation.
- Proficiency in Python for scripting and automation.
- Good understanding of Azure platform services, especially those related to data storage and compute.
- Intermediate proficiency in Excel and Power BI for reporting and stakeholder engagement.
- Strong verbal and written communication skills; ability to gather requirements and explain solutions to non-technical stakeholders.
- Demonstrated experience in managing or leading project deliverables independently.

Preferred Qualifications
- Experience working in notebook-based environments such as Azure Databricks or Jupyter.
- Familiarity with DevOps, Git-based version control, and CI/CD pipelines for data workflows.
- Exposure to data warehousing concepts and building scalable ETL frameworks.
- Previous experience in a client-facing or delivery management role.
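For illustration, a minimal PySpark sketch of the kind of Databricks transformation this role describes: reading a SAP-sourced extract landed by an ADF copy activity, applying a business rule, and writing a curated table. The paths, columns, and table names are hypothetical.

```python
# Minimal PySpark sketch (hypothetical paths and columns) of a
# business-rule transformation of SAP-sourced data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sap_sales_curation").getOrCreate()

# Bronze layer: raw SAP extract landed by an ADF copy activity (assumed path)
raw = spark.read.parquet("/mnt/bronze/sap/sales_extract")

# Business rule: keep posted documents and derive net revenue
curated = (
    raw.filter(F.col("doc_status") == "POSTED")
       .withColumn("net_revenue", F.col("gross_amount") - F.col("discount_amount"))
       .select("doc_id", "posting_date", "customer_id", "net_revenue")
)

# Silver layer: curated table for analytical consumption
curated.write.mode("overwrite").saveAsTable("silver.sap_sales_curated")
```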
Posted 2 weeks ago
2.0 - 7.0 years
0 Lacs
India
On-site
WHO WE ARE
Sapaad is a global leader in all-in-one unified commerce platforms, dedicated to delivering world-class software solutions. Its flagship product, Sapaad, has seen tremendous success in the last decade, with thousands of customers worldwide and many more signing on. Driven by a team of passionate developers and designers, Sapaad is constantly innovating, introducing cutting-edge features that reshape the industry. Headquartered in Singapore, with offices across five countries, Sapaad is backed by technology veterans with deep expertise in web, mobility, and e-commerce, making it a key player in the tech landscape.

THE OPPORTUNITY
Sapaad PTE LTD is seeking a Data Engineer who will take charge of constructing our distributed processing and big data infrastructure, as well as the accompanying applications and tools. We're looking for someone with a fervent drive to tackle intricate data challenges and collaborate closely with the data team, all while staying abreast of the latest features and tools in Big Data and Data Science. The successful candidate will play a pivotal role in supporting software developers, data architects, data analysts, and data scientists in various data initiatives, and will ensure the smooth and efficient delivery of data across ongoing projects. We require an individual who is self-directed and capable of adeptly managing the data requirements of multiple teams, systems, and products.

ROLES AND RESPONSIBILITIES
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics (see the sketch after this posting).
- Work with stakeholders, including the Executive, Product, Data Architecture, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep data separated and secure.
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.

ROLE REQUIREMENTS
- 2 to 7 years of experience in a Data Engineer role, with a graduate degree in Computer Science, IT, Statistics, or another quantitative field.
- Advanced working SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Experience building and optimizing 'big data' pipelines, architectures, and data sets.
- Strong analytic skills for working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' stores.
- Good communication skills and a team-player attitude.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Preferred experience with the following software/tools:
- Big data tools: Hadoop, Spark, Kafka, etc.
- Relational SQL and NoSQL databases.
- Data pipeline/ETL tools like Informatica and DataStage, and cloud tools like Azure Data Factory (ADF).
- Data engineering on cloud services like Azure, AWS, or GCP.
- Stream-processing systems: Storm, Spark Streaming, etc.
- Object-oriented/functional scripting languages: Python, Java, Scala, etc.
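As a rough illustration of the "analytics tools on top of the pipeline" responsibility, a small PySpark batch job computing a daily revenue metric. The S3 paths and schema are invented for the example.

```python
# Illustrative PySpark job (hypothetical schema and buckets) computing a
# daily order-revenue metric of the kind an analytics pipeline might expose.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_metrics").getOrCreate()

orders = spark.read.json("s3://example-bucket/orders/")  # assumed landing zone

daily = (
    orders.withColumn("order_date", F.to_date("created_at"))
          .groupBy("order_date", "outlet_id")
          .agg(F.countDistinct("order_id").alias("orders"),
               F.sum("order_total").alias("revenue"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/metrics/daily_orders/")
```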
Posted 2 weeks ago
7.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Manager – Azure Data Architect

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity
We're looking for Managers (Big Data Architects) with strong technology and data understanding and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
- Develop standardized practices for delivering new products and capabilities using Big Data and cloud technologies, including data acquisition, transformation, analysis, modelling, governance, and data management.
- Interact with senior client technology leaders to understand their business goals; create and propose solutions, estimate effort, build architectures, and develop and deliver technology solutions.
- Define and develop client-specific best practices around data management within a cloud environment.
- Recommend design alternatives for data ingestion, processing, and provisioning layers.
- Design and develop data ingestion programs to process large data sets in batch mode using ADB, ADF, PySpark, Python, and Synapse.
- Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies (see the sketch after this posting).
- Manage teams, with experience in end-to-end delivery.
- Build technical capability and teams to deliver.

Skills And Attributes For Success
- Strong understanding of and familiarity with all Cloud ecosystem components.
- Strong understanding of underlying cloud architectural concepts and distributed computing paradigms.
- Experience in the development of large-scale data processing.
- Hands-on programming experience in ADB, ADF, Synapse, Python, PySpark, and SQL.
- Hands-on expertise in cloud services like AWS and/or the Microsoft Azure ecosystem.
- Solid understanding of ETL methodologies in a multi-tiered stack with data modelling and data governance.
- Experience with BI and data analytics databases.
- Experience converting business problems and challenges into technical solutions, considering security, performance, scalability, etc.
- Experience in enterprise-grade solution implementations.
- Experience in performance benchmarking enterprise applications.
- Strong stakeholder, client, team, process, and delivery management skills.

To qualify for the role, you must have
- A flexible, proactive, self-motivated working style with strong personal ownership of problem resolution.
- Excellent communication skills (written and verbal, formal and informal).
- The ability to multi-task under pressure and work independently with minimal supervision.
- A team-player attitude and enjoyment of a cooperative, collaborative environment.
- Adaptability to new technologies and standards.
- Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
- A minimum of 7 years of hands-on experience in one or more of the above areas.
- A minimum of 10 years of industry experience.

Ideally, you'll also have
- Project management skills
- Client management skills
- Solutioning skills

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching, and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
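A hedged sketch of the real-time ingestion pattern named above (Kafka plus Spark Structured Streaming into Delta). The broker address, topic, schema, and paths are placeholders, not EY specifics.

```python
# Sketch of real-time ingestion with Spark Structured Streaming from Kafka.
# Broker, topic, schema, and paths are all placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "live-events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Land the parsed events in a Delta bronze layer with checkpointing
(stream.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/chk/live_events")
       .start("/mnt/bronze/live_events"))
```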
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Functional Responsibility
- Sound knowledge of the banking domain (wholesale, retail, core banking, trade finance).
- In-depth understanding of RBI regulatory reporting and guidelines, including the RBI ADF approach document.
- Experience in handling important regulatory returns such as Form A, Form VIII (SLR), Form X, BSR, SFR (Maintenance of CRR), DSB returns, Forex, and priority sector lending related returns to RBI.
- Understanding of the balance sheet and P&L.
- Supporting clients by providing user manuals and trainings, conducting workshops, and preparing case studies.

Process Adherence
- Review the initial and ongoing development of the product.
- Responsible for documenting, validating, communicating, and coordinating requirements.
- Provide support to business development by preparing proposals, concept presentations, and outreach activities.
- Maintain and update trackers, review test cases, and provide training to internal as well as external stakeholders.

Client Management / Stakeholder Management
- Interact with clients in relation to assignment execution and manage operational relationships effectively.
- Interact with clients for requirement gathering, issue tracking, change request discussions, FRD writing, and preparing project status reports.

People Development
- Coordinate with the assignment-specific team of consultants, developers, and QA, and monitor performance to ensure timely and effective delivery.
Posted 2 weeks ago
7.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Manager – Azure Data Architect

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity
We're looking for Managers (Big Data Architects) with strong technology and data understanding and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
- Develop standardized practices for delivering new products and capabilities using Big Data and cloud technologies, including data acquisition, transformation, analysis, modelling, governance, and data management.
- Interact with senior client technology leaders to understand their business goals; create and propose solutions, estimate effort, build architectures, and develop and deliver technology solutions.
- Define and develop client-specific best practices around data management within a cloud environment.
- Recommend design alternatives for data ingestion, processing, and provisioning layers.
- Design and develop data ingestion programs to process large data sets in batch mode using ADB, ADF, PySpark, Python, and Synapse.
- Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.
- Manage teams, with experience in end-to-end delivery.
- Build technical capability and teams to deliver.

Skills And Attributes For Success
- Strong understanding of and familiarity with all Cloud ecosystem components.
- Strong understanding of underlying cloud architectural concepts and distributed computing paradigms.
- Experience in the development of large-scale data processing.
- Hands-on programming experience in ADB, ADF, Synapse, Python, PySpark, and SQL.
- Hands-on expertise in cloud services like AWS and/or the Microsoft Azure ecosystem.
- Solid understanding of ETL methodologies in a multi-tiered stack with data modelling and data governance.
- Experience with BI and data analytics databases.
- Experience converting business problems and challenges into technical solutions, considering security, performance, scalability, etc.
- Experience in enterprise-grade solution implementations.
- Experience in performance benchmarking enterprise applications.
- Strong stakeholder, client, team, process, and delivery management skills.

To qualify for the role, you must have
- A flexible, proactive, self-motivated working style with strong personal ownership of problem resolution.
- Excellent communication skills (written and verbal, formal and informal).
- The ability to multi-task under pressure and work independently with minimal supervision.
- A team-player attitude and enjoyment of a cooperative, collaborative environment.
- Adaptability to new technologies and standards.
- Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
- A minimum of 7 years of hands-on experience in one or more of the above areas.
- A minimum of 10 years of industry experience.

Ideally, you'll also have
- Project management skills
- Client management skills
- Solutioning skills

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching, and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 weeks ago
2.0 years
4 - 5 Lacs
India
On-site
Urgent Hiring

Job description

Role & responsibilities
- Design and maintain scalable data pipelines using PySpark.
- Work with Databricks for seamless data workflows.
- Use Azure Data Factory (ADF) to orchestrate data movement.
- Collaborate with data scientists and analysts to meet data needs.
- Ensure data quality and troubleshoot any issues (see the sketch after this posting).

Preferred candidate profile
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience with PySpark, Databricks, and ADF.
- Familiarity with cloud platforms (Azure preferred).
- Strong SQL and NoSQL database skills.
- Excellent problem-solving abilities and a team-player attitude.

Perks and benefits
- Competitive salary and benefits.
- Professional growth opportunities.
- Collaborative and innovative work environment.

Job Type: Full-time
Pay: ₹400,000.00 - ₹500,000.00 per year
Benefits: Flexible schedule
Experience:
- ETL: 2 years (Required)
- Azure: 2 years (Required)
- Databricks: 2 years (Required)
Work Location: In person
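As an illustration of the data-quality responsibility, a minimal PySpark gate that fails a pipeline run on null or duplicate keys. The table and column names are made up.

```python
# Hedged example: a simple data-quality gate in PySpark.
# Table and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.table("bronze.customer_orders")

# Two basic checks: no null business keys, no duplicate business keys
null_keys = df.filter(F.col("order_id").isNull()).count()
dupe_keys = df.groupBy("order_id").count().filter("count > 1").count()

if null_keys or dupe_keys:
    raise ValueError(f"DQ failure: {null_keys} null keys, {dupe_keys} duplicates")

# Only promote the data once the checks pass
df.write.mode("append").saveAsTable("silver.customer_orders")
```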
Posted 2 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Data Engineer
Work Mode: Hybrid (3 days from office, only 4 hours onsite per day)
Location: Gurgaon

About the Role
BayOne is looking for a skilled Data Engineer to join our dynamic team in Gurgaon. This hybrid role offers flexibility, with just 4 hours per day required in-office, 3 days a week. If you're passionate about building scalable data solutions using Azure and Databricks and thrive in a fast-paced environment, we'd love to hear from you.

Key Responsibilities
- Design and build scalable data pipelines and data lake/warehouse solutions on Azure and Databricks.
- Work extensively with SQL, schema design, and dimensional data modeling (see the sketch after this posting).
- Develop and maintain ETL/ELT processes using tools like ADF, Talend, and Informatica.
- Leverage Azure Synapse, Azure SQL, Snowflake, Redshift, or BigQuery to manage and optimize data storage and retrieval.
- Utilize Spark, PySpark, and Spark SQL for big data processing.
- Collaborate cross-functionally to gather requirements, design solutions, and implement best practices in data engineering.

Required Qualifications
- Minimum 5 years of experience in data engineering, data warehousing, or data lake technologies.
- Strong experience on the Azure cloud platform (preferred over others).
- Proven expertise in SQL, data modeling, and data warehouse architecture.
- Hands-on with Databricks and Spark, and proficient in PySpark/Spark SQL.
- Experience with ETL/ELT tools such as Azure Data Factory (ADF), Talend, or Informatica.
- Strong communication skills and the ability to thrive in a fast-paced, dynamic environment.
- Self-motivated, independent learner with a proactive mindset.

Nice-to-Have Skills
- Knowledge of Azure Event Hub, IoT Hub, Stream Analytics, Cosmos DB, and Azure Analysis Services.
- Familiarity with SAP ECC, S/4HANA, or HANA data sources.
- Intermediate skills in Power BI, Azure DevOps, CI/CD pipelines, and cloud migration strategies.

About BayOne
BayOne is a 12-year-old software consulting company headquartered in Pleasanton, California. We specialize in Talent Solutions, helping clients build diverse and high-performing teams. Our mission is to #MakeTechPurple by driving diversity in tech while delivering cutting-edge solutions across:
- Project & Program Management
- Cloud & IT Infrastructure
- Big Data & Analytics
- Software & Quality Engineering
- User Experience Design

Join us to shape the future of data-driven decision-making while working in a flexible and collaborative environment.
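A small Spark SQL sketch of the dimensional-modeling work mentioned above: aggregating a fact table against date and product dimensions. The star-schema tables are assumed to exist and are hypothetical.

```python
# Illustrative Spark SQL star-schema query (hypothetical fact/dim tables).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

# Classic dimensional query: fact table joined to conformed dimensions
monthly_sales = spark.sql("""
    SELECT d.year, d.month, p.category,
           SUM(f.sales_amount) AS total_sales
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.year, d.month, p.category
""")
monthly_sales.show()
```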
Posted 2 weeks ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Role: Senior Dot Net Developer
Experience: 8+ years (must)
Notice period: Immediate only
Location: Trivandrum/Kochi
(All the bold-highlighted items are must-have skills.)

Introduction: We are looking for candidates with 8+ years of experience in the IT industry and strong .Net/.Net Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role, and hence the candidate should have strong communication skills. This is for a US client, and the resource should be hands-on, with experience in coding and Azure Cloud. Working hours: 8 hours, with a 4-hour overlap during the EST time zone (12 PM - 9 PM). The overlap hours are mandatory, as meetings happen during this window.

Responsibilities include:
• Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery
• Integrate and support third-party APIs and external services
• Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack
• Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC)
• Participate in Agile/Scrum ceremonies and manage tasks using Jira
• Understand technical priorities, architectural dependencies, risks, and implementation challenges
• Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability

Certifications:
• Microsoft Certified: Azure Fundamentals
• Microsoft Certified: Azure Developer Associate
• Other relevant certifications in Azure, .NET, or Cloud technologies

Primary Skills: 8+ years of hands-on development experience with:
• C#, .NET Core 6/8+, Entity Framework / EF Core
• JavaScript, jQuery, REST APIs
• MS SQL Server, including complex SQL queries, stored procedures, views, functions, packages, cursors, tables, and object types
• Unit testing with XUnit and MSTest
• Software design patterns, system architecture, and scalable solution design
• Leading and inspiring teams through clear communication, technical mentorship, and ownership
• Strong problem-solving and debugging capabilities
• Writing reusable, testable, and efficient code
• Developing and maintaining frameworks and shared libraries to support large-scale applications
• Excellent technical documentation, communication, and leadership skills
• Microservices and Service-Oriented Architecture (SOA)
• API integrations

2+ years of hands-on experience with Azure Cloud Services, including:
• Azure Functions and Azure Durable Functions
• Azure Service Bus, Event Grid, Storage Queues
• Blob Storage, Azure Key Vault, SQL Azure
• Application Insights, Azure Monitoring

Secondary Skills:
• Familiarity with AngularJS, ReactJS, and other front-end frameworks
• Experience with Azure API Management (APIM)
• Knowledge of Azure containerization and orchestration (e.g., AKS/Kubernetes)
• Experience with Azure Data Factory (ADF) and Logic Apps
• Exposure to application support and operational monitoring
• Azure DevOps - CI/CD pipelines (Classic / YAML)
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Your Role and Impact
We are seeking a Data Analyst with T-SQL / Power BI / Python capability to integrate and migrate data across portfolios and produce effective Power BI dashboards and reports through optimization and model creation. This is an individual contributor role; the individual will work closely with the Global IT team to capture the right data requirements from stakeholders, support program operational activities, and develop Power BI dashboards/reports for data-driven decision making.

Your Contribution
- Strong experience in the creation and maintenance of SQL queries, views, and stored procedures (see the sketch after this list).
- Design compelling and intuitive data visualizations in Power BI to convey insights effectively; customize and format visuals to meet business requirements.
- Experience with the Azure cloud platform and its data services (ADF, ADLS, Synapse) is preferred.
- Manage data analysis and migration activities on critical projects on ERP, CRM, and PLM.
- Strong experience in MS SQL, SSRS, and SSIS.
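One possible shape of the T-SQL-plus-Python workflow this role describes: pulling a SQL Server view into pandas as a hand-off for a Power BI dataset. The server, database, credentials, and view name are placeholders, and pyodbc is one common driver choice, not necessarily the team's.

```python
# Sketch (assumed server, database, and view names) of pulling a SQL Server
# view into pandas for analysis or as a Power BI-ready extract.
import pyodbc
import pandas as pd

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=analytics;"
    "UID=report_user;PWD=example"  # use Key Vault / managed identity in practice
)

df = pd.read_sql("SELECT * FROM dbo.vw_portfolio_summary", conn)
df.to_csv("portfolio_summary.csv", index=False)  # hand-off for a PBI dataset
conn.close()
```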
Posted 2 weeks ago
6.0 years
0 Lacs
India
Remote
AI/ML Engineer – Senior Consultant

The AI Engineering Group is part of the Data Science & AI Competency Center and focuses on the technical and engineering aspects of DS/ML/AI solutions. We are looking for experienced AI/ML Engineers to join our team to help us bring AI/ML solutions into production, automate processes, and define reusable best practices and accelerators.

Duties description: The person we are looking for will become part of the Data Science and AI Competency Center, working in the AI Engineering team. The key duties are:
- Building high-performing, scalable, enterprise-grade ML/AI applications in a cloud environment
- Working with Data Science, Data Engineering, and Cloud teams to implement machine learning models into production
- Practical and innovative implementations of ML/AI automation, for scale and efficiency
- Design, delivery, and management of industrialized processing pipelines
- Defining and implementing best practices in the ML model life cycle and ML operations (see the sketch after this posting)
- Implementing AI/MLOps frameworks and supporting Data Science teams in best practices
- Gathering and applying knowledge of modern techniques, tools, and frameworks in ML architecture and operations
- Gathering technical requirements and estimating planned work
- Presenting solutions, concepts, and results to internal and external clients
- Acting as Technical Leader on ML projects: defining tasks and guidelines and evaluating results
- Creating technical documentation
- Supporting and growing junior engineers

Must-have skills:
- Good understanding of ML/AI concepts: types of algorithms, machine learning frameworks, model efficiency metrics, model life cycle, AI architectures
- Good understanding of cloud concepts and architectures, as well as working knowledge of selected cloud services, preferably GCP
- Experience in programming ML algorithms and data processing pipelines using Python
- At least 6-8 years of experience in production-ready code development
- Experience in designing and implementing data pipelines
- Practical experience implementing ML solutions on GCP Vertex AI and/or Databricks
- Good communication skills
- Ability to work in a team and support others
- Taking responsibility for tasks and deliverables
- Great problem-solving skills and critical thinking
- Fluency in written and spoken English

Nice-to-have skills and knowledge:
- Practical experience with other programming languages: PySpark, Scala, R, Java
- Practical experience with tools like Airflow, ADF, or Kubeflow
- Good understanding of CI/CD and DevOps concepts, and experience with selected tools (preferably GitHub Actions, GitLab, or Azure DevOps)
- Experience applying and/or defining software engineering best practices
- Experience productizing ML solutions using technologies like Docker/Kubernetes

We Offer:
- Stable employment. On the market since 2008, with 1300+ talents currently on board in 7 global sites.
- 100% remote work and flexibility regarding working hours.
- Full-time position.
- Comprehensive online onboarding program with a "Buddy" from day 1.
- Cooperation with top-tier engineers and experts.
- Internal Gallup Certified Strengths Coach to support your growth.
- Unlimited access to the Udemy learning platform from day 1.
- Certificate training programs; Lingarians earn 500+ technology certificates yearly.
- Upskilling support: capability development programs, Competency Centers, knowledge-sharing sessions, community webinars, and 110+ training opportunities yearly.
- Grow as we grow as a company; 76% of our managers are internal promotions.
- A diverse, inclusive, and values-driven community.
- Autonomy to choose the way you work. We trust your ideas.
- Create our community together. Refer your friends to receive bonuses.
- Activities to support your well-being and health.
- Plenty of opportunities to donate to charities and support the environment.

Please click on this link to submit your application: https://system.erecruiter.pl/FormTemplates/RecruitmentForm.aspx?WebID=ac709bd295cc4008af7d0a7a0e465818
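A minimal, hedged sketch of the model life-cycle practice listed above: training a model on synthetic data and tracking the run with MLflow. Nothing here is specific to the employer's stack.

```python
# Minimal model life-cycle sketch: train, evaluate, and track with MLflow.
# The dataset is synthetic; metric and model choices are illustrative only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for deployment
```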
Posted 2 weeks ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Data Architect (Databricks)
Location: Pune (WFO)
- Total experience: 10 to 15 years
- 8+ years of experience in data engineering/architecture roles
- 5+ years of experience with Databricks
- Strong SQL and performance tuning skills
- Experience with data orchestration tools like Airflow, ADF, or dbt (see the sketch after this list)
- Proficient in Spark (PySpark or Scala) and Python
- Experience working with cloud platforms (AWS, Azure, or GCP)
- Familiar with DevOps practices for data platforms (Git, CI/CD, Infrastructure-as-Code)
- Solid understanding of data governance, security best practices, and data quality frameworks
- Excellent communication and documentation skills
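For the orchestration requirement, a bare-bones Airflow 2.x DAG with two dependent tasks. The task bodies are placeholders; ADF or dbt would fill the same role with different tooling.

```python
# Hedged orchestration sketch: a daily Airflow DAG (Airflow 2.4+) with two
# dependent tasks. DAG id and task logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")  # placeholder for real extraction logic

def transform():
    print("run Spark job / dbt model")  # placeholder for real transformation

with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after extract succeeds
```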
Posted 2 weeks ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction: We are looking for candidates with 8+ years of experience in the IT industry and strong .Net/.Net Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role, and hence the candidate should have strong communication skills. This is for a US client, and the resource should be hands-on, with experience in coding and Azure Cloud. Working hours: 8 hours, with a 4-hour overlap during the EST time zone (12 PM - 9 PM). The overlap hours are mandatory, as meetings happen during this window.

Responsibilities include:
• Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery
• Integrate and support third-party APIs and external services
• Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack
• Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC)
• Participate in Agile/Scrum ceremonies and manage tasks using Jira
• Understand technical priorities, architectural dependencies, risks, and implementation challenges
• Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability

Certifications:
• Microsoft Certified: Azure Fundamentals
• Microsoft Certified: Azure Developer Associate
• Other relevant certifications in Azure, .NET, or Cloud technologies

Primary Skills: 8+ years of hands-on development experience with:
• C#, .NET Core 6/8+, Entity Framework / EF Core
• JavaScript, jQuery, REST APIs
• MS SQL Server, including complex SQL queries, stored procedures, views, functions, packages, cursors, tables, and object types
• Unit testing with XUnit and MSTest
• Software design patterns, system architecture, and scalable solution design
• Leading and inspiring teams through clear communication, technical mentorship, and ownership
• Strong problem-solving and debugging capabilities
• Writing reusable, testable, and efficient code
• Developing and maintaining frameworks and shared libraries to support large-scale applications
• Excellent technical documentation, communication, and leadership skills
• Microservices and Service-Oriented Architecture (SOA)
• API integrations

2+ years of hands-on experience with Azure Cloud Services, including:
• Azure Functions and Azure Durable Functions
• Azure Service Bus, Event Grid, Storage Queues
• Blob Storage, Azure Key Vault, SQL Azure
• Application Insights, Azure Monitoring

Secondary Skills:
• Familiarity with AngularJS, ReactJS, and other front-end frameworks
• Experience with Azure API Management (APIM)
• Knowledge of Azure containerization and orchestration (e.g., AKS/Kubernetes)
• Experience with Azure Data Factory (ADF) and Logic Apps
• Exposure to application support and operational monitoring
• Azure DevOps - CI/CD pipelines (Classic / YAML)
Posted 2 weeks ago
8.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Role: Data Engineer
Primary Skills: ADF, data warehousing/data modeling, ETL, SSIS, PySpark, SQL & Power BI
Job Title: Senior Data Engineer
Experience: 8 to 12 Years
Location: Remote
Shift Timing: 12:30 PM to 9:30 PM IST
Employment Type: Full-time

Job Summary:
We are seeking a highly skilled and experienced Data Engineer with 6-8 years of experience in data engineering and ETL development. The ideal candidate will have hands-on expertise in Databricks, Azure Data Factory (ADF), SQL Server, and PySpark, and be capable of designing and implementing scalable data pipelines. Familiarity with SSIS and Power BI is a plus.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines using Databricks and ADF.
- Develop and optimize complex SQL queries and stored procedures on SQL Server.
- Implement data transformation and data wrangling using PySpark (see the sketch after this posting).
- Collaborate with data architects, analysts, and business stakeholders to deliver robust data solutions.
- Monitor and troubleshoot data workflows to ensure high availability and performance.
- Document technical processes, data flows, and integration strategies.
- Work in an Agile/Scrum development environment and participate in sprint planning and code reviews.
- Support data integration from multiple sources and ensure data quality and governance.

Required Skills:
- 8-12 years of experience in data engineering roles.
- Strong hands-on experience with Databricks and ADF (Azure Data Factory).
- Expertise in SQL Server, including T-SQL development and performance tuning.
- Solid experience in PySpark for large-scale data processing.
- Familiarity with DevOps practices and version control tools (e.g., Git).
- Good analytical and problem-solving skills.
- Effective communication and collaboration abilities.

Good to Have:
- Experience with SSIS (SQL Server Integration Services).
- Working knowledge of Power BI for data visualization and reporting.

About IGT Solutions:
IGT Solutions is a next-gen customer experience (CX) company, defining and delivering transformative experiences for the most innovative global brands using digital technologies. With the combination of Digital and Human Intelligence, IGT is the preferred partner for managing end-to-end CX journeys across the Travel and High-Growth Tech industries. We have a global delivery footprint spread across 30 delivery centers in China, Colombia, Egypt, India, Indonesia, Malaysia, the Philippines, Romania, South Africa, Spain, the UAE, the US, and Vietnam, with 25,000+ CX and technology experts from 35+ nationalities.

IGT's Digital team collaborates closely with our customers' business and technology teams to take solutions to market faster while sustaining quality, focusing on business value and improving the overall end-customer experience. Our offerings include industry solutions as well as digital services. We work with leading global enterprise customers to improve synergies between business and technology by enabling rapid business value realization through digital technologies. These include lifecycle transformation and rapid development / technology solution delivery services, delivered leveraging traditional as well as digital technologies, deep functional understanding, and software engineering expertise.

IGT is ISO 27001:2013, CMMI SVC Level 5, and ISAE-3402 compliant for IT, and COPC® Certified v6.0, ISO 27001:2013, and PCI DSS 3.2 certified for BPO processes. The organization follows Six Sigma rigor for process improvements.
It is our policy to provide equal employment opportunities to all individuals based on job-related qualifications and ability to perform a job, without regard to age, gender, gender identity, sexual orientation, race, color, religion, creed, national origin, disability, genetic information, veteran status, citizenship or marital status, and to maintain a non-discriminatory environment free from intimidation, harassment or bias based upon these grounds.
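An illustrative PySpark wrangling step of the kind this role covers: keeping only the latest record per business key with a window function. The staging table and columns are hypothetical.

```python
# Illustrative PySpark deduplication: keep the latest record per key.
# Table and column names are made up for the example.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("latest_per_key").getOrCreate()
df = spark.read.table("staging.customer_updates")

# Rank records within each customer by recency, then keep rank 1
w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
latest = (df.withColumn("rn", F.row_number().over(w))
            .filter("rn = 1")
            .drop("rn"))

latest.write.mode("overwrite").saveAsTable("curated.customers")
```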
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You will be part of iOSYS Software India Pvt. Ltd, a company that specializes in providing Digital Marketing, Mobile Applications, and IT Development Solutions to startups, SMBs, and enterprises. As a leading IT solution developer, we focus on analyzing and understanding your requirements to deliver innovative solutions that add value to your business and provide a world-class experience for end-users. We are dedicated to creating cost-effective solutions in the mobility space to address the evolving challenges in the marketing and technology landscape.

As a Data Engineer (Databricks) at iOSYS, you will be responsible for designing and building data pipelines using Spark SQL and PySpark in Azure Databricks. You will also work on ETL pipelines using Azure Data Factory and maintain a Lakehouse architecture in ADLS/Databricks. Your role will involve data preparation tasks, collaborating with the DevOps team for deployment, and ensuring data processes are controlled and errors are corrected promptly. Additionally, you will participate in global Analytics team projects, collaborate with Data Science and Business Intelligence colleagues, and lead or contribute to various projects within the team.

Location options for this position include Gurugram, Bengaluru, and Hyderabad, with a hybrid office mode. We are looking for candidates with 5-8 years of experience, excellent communication skills, and proficiency in PySpark, Azure, ADF, Databricks, ETL, and SQL. It is essential to have expertise in Azure Databricks, Azure Data Factory, PySpark, Spark SQL, and ADLS. Good-to-have skills include knowledge of change management tools and DevOps practices.

If you are passionate about data engineering and eager to work in a dynamic environment that encourages innovation and collaboration, we welcome you to apply for this exciting opportunity with iOSYS Software India Pvt. Ltd.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Senior Data Engineer at Veersa, you will utilize your deep expertise in ETL/ELT processes, data warehousing principles, and both real-time and batch data integrations. In this role, you will have the opportunity to mentor junior engineers, establish best practices, and contribute to the overarching data strategy of the company. Your proficiency in SQL, Python, and ideally Airflow and Bash scripting will be instrumental in designing and implementing scalable data integration and pipeline solutions using Azure cloud services.

Your key responsibilities will include architecting and implementing data solutions, developing ETL/ELT processes, building and automating data workflows, orchestrating pipelines, and writing Bash scripts for system automation. Collaborating with business and technical stakeholders to understand data requirements and translating them into technical solutions will be a key aspect of your role. Moreover, you will be expected to develop data flows, mappings, quality standards, and validation rules across various systems, ensuring adherence to best practices in data modeling, metadata management, and data governance.

To qualify for this role, you must hold a B.Tech or B.E degree in Computer Science, Information Systems, or a related field, along with a minimum of 3 years of experience in data engineering, focusing on Azure-based solutions. Your proficiency in SQL and Python, experience with Airflow and Bash scripting, and proven track record in real-time and batch data integrations will be essential. Hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks is highly desirable, as is a strong understanding of data warehousing principles, ETL/ELT methodologies, and data pipeline architecture.

In addition, familiarity with data quality, metadata management, and data validation frameworks, coupled with strong problem-solving skills and clear communication abilities, will set you up for success in this role. Preferred qualifications include experience with multi-tenant SaaS data solutions, knowledge of DevOps practices, CI/CD pipelines, and version control systems like Git, and a proven ability to mentor and coach other engineers in technical decision-making processes. By joining Veersa as a Senior Data Engineer, you will play a crucial role in driving innovation and delivering cutting-edge technical solutions to clients in the US healthcare industry.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You are a passionate and experienced Oracle ERP Techno-Functional Consultant/Architect who will be responsible for driving the implementation and optimization of critical Oracle ERP solutions in financials, supply chain, and HCM. You will lead complex Oracle Cloud and EBS initiatives, contributing to the digital transformation of a dynamic global enterprise.

Your key responsibilities will include leading technical delivery for Oracle ERP implementations, designing and implementing scalable solutions across modules, developing integrations using Oracle Integration Cloud and various technologies, delivering reporting solutions, handling technical upgrades and data migrations, collaborating with stakeholders to translate requirements into technical designs, and managing customization and performance tuning.

To excel in this role, you should have at least 5 years of hands-on experience with Oracle ERP; expertise in Oracle Cloud Integration, PL/SQL, SQL tuning, BI/OTBI, ADF, VBCS, SOAP/REST APIs, and Oracle Workflow & Personalization; and proven experience leading full-cycle ERP implementations. You should also possess a strong technical architecture background, stakeholder management skills, and familiarity with relevant tools. Additional qualifications such as Oracle certifications, experience in industries like healthcare, manufacturing, or retail, and exposure to Taleo integrations and HCM Extracts would be a bonus.

By joining our team, you will be at the forefront of enterprise-wide ERP transformation initiatives, working with seasoned professionals and solution architects, leading global projects, mentoring teams, and benefiting from competitive salary, upskilling programs, and long-term growth opportunities. If you are ready to shape the future of ERP solutions and build smarter, faster, and future-ready systems in Bangalore, apply now and be part of our exciting journey.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Data Warehouse Engineer will be responsible for managing and optimizing data processes in an Azure environment using Snowflake. The ideal candidate should have solid SQL skills and a basic understanding of data modeling. Experience with CI/CD processes and Azure ADF is preferred. Additionally, expertise in ETL/ELT frameworks and ER/Studio would be a plus.

As a Senior Data Warehouse Engineer, in addition to the core requirements, you will oversee other engineers while also being actively involved in data modeling and Snowflake SQL optimization. You will be responsible for conducting design reviews, code reviews, and deployment reviews with the engineering team. Familiarity with medallion architecture and experience in the healthcare or life sciences industry will be highly advantageous.

At Myridius, we are committed to transforming the way businesses operate by offering tailored solutions in AI, data analytics, digital engineering, and cloud innovation. With over 50 years of expertise, we aim to drive organizations through the rapidly evolving landscapes of technology and business. Our integration of cutting-edge technology with deep domain knowledge enables businesses to seize new opportunities, drive significant growth, and maintain a competitive edge in the global market. We go beyond typical service delivery to craft transformative outcomes that help businesses not just adapt, but thrive in a world of continuous change. Discover how Myridius can elevate your business to new heights of innovation by visiting us at www.myridius.com and start leading the change.
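A short sketch using the Snowflake Python connector, one plausible way the SQL work described here is driven programmatically. The account locator, credentials, and table are placeholders.

```python
# Hedged sketch with the Snowflake Python connector; account, credentials,
# warehouse, and table names are all placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.west-europe.azure",  # assumed Azure-hosted account locator
    user="etl_user",
    password="example",                   # prefer key-pair auth in practice
    warehouse="ANALYTICS_WH",
    database="DW",
    schema="PUBLIC",
)

cur = conn.cursor()
cur.execute("""
    SELECT region, SUM(amount) AS total
    FROM fact_claims
    GROUP BY region
""")
for region, total in cur.fetchall():
    print(region, total)
cur.close()
conn.close()
```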
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
You are a Data Engineer with 3-7 years of experience, currently based in Mumbai and available for face-to-face interaction. Your responsibilities will include building and managing data pipelines using Snowflake and Azure Data Factory (ADF), writing optimized SQL for large-scale data analysis, and monitoring and enhancing Snowflake performance.

To excel in this role, you should have a strong background in data engineering and SQL, with a focus on Snowflake and ADF. Additionally, familiarity with data quality, governance, and Python will be beneficial. A Snowflake certification will be considered a plus.

If you meet these requirements and are passionate about working in a dynamic environment where you can utilize your skills in Snowflake and SQL, we encourage you to apply for this position. Please send your CV to shruthi.pu@andortech.com to be considered for this opportunity.
Posted 2 weeks ago
3.0 - 6.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Your responsibilities
We are seeking a highly experienced Senior Data Engineer with expertise in Azure-based cloud architecture to join our team. In this role, you will design, build, and optimize complex data pipelines and cloud infrastructure to support data-driven business decisions. You'll be responsible for implementing robust, scalable solutions leveraging Azure Synapse Analytics, Databricks, ADF, and SQL with DevOps. The ideal candidate will also possess experience in Power BI, data mining, data analysis, and migration of on-premises systems to the cloud. Familiarity with Microsoft Fabric is an added advantage.

Key Responsibilities

Cloud Architecture & Infrastructure:
- Design and implement Azure-based cloud infrastructure, including data storage, processing, and analytics components.
- Develop and optimize scalable data architectures to ensure high performance and availability.

Data Pipeline Development:
- Build and manage ETL/ELT pipelines using Azure Synapse, Azure Data Factory (ADF), and Databricks.
- Ensure the efficient flow of data from source to target systems, implementing robust data quality controls.

Data Transformation & Analysis:
- Utilize SQL, Synapse, and Databricks for data transformation, data mining, and advanced data analysis.
- Implement best practices for data governance, lineage, and security within Azure environments.

Migration Projects:
- Lead the migration of on-premises data systems to Azure cloud infrastructure, ensuring minimal disruption and data integrity.
- Optimize data migration strategies and methodologies for various applications and workloads.

DevOps & CI/CD Pipelines:
- Manage DevOps processes, ensuring continuous integration and deployment (CI/CD) for data solutions.
- Develop and maintain infrastructure as code (IaC) for deployment, testing, and monitoring.

Business Intelligence & Reporting:
- Collaborate with business stakeholders to design and implement reporting solutions in Power BI, ensuring data is accessible and actionable.
- Develop visualizations and dashboards to support data-driven decision-making.

Collaboration & Best Practices:
- Work closely with data scientists, analysts, and other business stakeholders to understand requirements and provide optimized data solutions.
- Drive best practices in data engineering, including coding standards, testing, version control, and documentation.

Your profile
- 3-6 years of experience.
- Education: Bachelor's or master's degree in computer science, information technology, data engineering, or a related field.
- Technical expertise:
  - Azure Cloud: advanced proficiency in Azure services, including Synapse Analytics, Data Factory, Databricks, SQL, Blob Storage, and Data Lake.
  - Data engineering: strong skills in SQL, ETL/ELT processes, data warehousing, and data modeling.
  - DevOps: experience with CI/CD pipeline setup, automation, and Azure DevOps.
  - Data analysis & BI: proficient in data analysis and visualization using Power BI; experience in data mining techniques is desirable.
  - Migration experience: proven track record of migrating on-premises systems to Azure cloud.
- Additional skills:
  - Knowledge of Microsoft Fabric is a plus.
  - Familiarity with Infrastructure as Code (IaC) tools like ARM templates or Terraform.
  - Strong understanding of data governance, security, and compliance best practices.
- Soft skills:
  - Strong problem-solving and analytical skills.
  - Excellent communication skills, with the ability to collaborate effectively across teams.
  - Ability to manage multiple priorities and work independently in a dynamic environment.

Work location: Thane (Mumbai)

Your benefits
Company Home - thyssenkrupp Materials Services (thyssenkrupp-materials-services.com)

Contact
Vinit Poojary - tkmits-in-recruitment@thyssenkrupp-materials.com
Posted 2 weeks ago
5.0 - 10.0 years
6 - 16 Lacs
Vadodara
Work from Office
We are seeking an experienced Senior Data Engineer with a minimum of 5 years of hands-on experience to join our dynamic data team. The ideal candidate will have strong expertise in Microsoft Fabric, demonstrate readiness to adopt cutting-edge tools like SAP Data Sphere, and possess foundational AI knowledge to guide our data engineering initiatives.

Key Roles and Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL/ELT processes using Microsoft Fabric tools such as Azure Data Factory (ADF) and Power BI.
- Work on large-scale data processing and analytics using PySpark.
- Evaluate and implement new data engineering tools like SAP Data Sphere through training or self-learning.
- Support business intelligence, analytics, and AI/ML initiatives by building robust data architectures.
- Apply AI techniques to automate workflows and collaborate with data scientists on machine learning projects.
- Mentor junior data engineers and lead data-related projects across departments.
- Coordinate with business teams, vendors, and technology partners for smooth project delivery.
- Create dashboards and reports using tools like Power BI or Tableau, ensuring data accuracy and accessibility.
- Support self-service analytics across business units and maintain consistency in all visualizations.

Experience & Technical Skills
- 5+ years of professional experience in data engineering, with expertise in Microsoft Fabric components.
- Strong proficiency in PySpark for large-scale data processing and distributed computing (MANDATORY).
- Extensive experience with Azure Data Factory (ADF) for orchestrating complex data workflows (MANDATORY).
- Proficiency in SQL and Python for data processing and pipeline development.
- Strong understanding of cloud data platforms, preferably the Azure ecosystem.
- Experience in data modelling, data warehousing, and modern data architecture patterns.

Interested candidates can share their updated profiles at itcv@alembic.co.in
Posted 2 weeks ago
3.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Databricks Developer
Location: Pune, Maharashtra (Hybrid - 2 to 3 days from office)
Experience: 3 to 10 Years
Employment Type: Full-Time / Contract
Notice Period: Immediate to 30 Days Preferred

About the Role:
We are seeking a highly skilled and motivated Databricks Developer to join our dynamic data engineering team. As a Databricks expert, you will be responsible for designing, developing, and maintaining robust big data pipelines and solutions using Databricks, Spark, and other modern data technologies.

Key Responsibilities:
- Design and develop scalable data pipelines using Databricks, Apache Spark, and Delta Lake (see the upsert sketch after this posting).
- Implement ETL/ELT workflows for processing large volumes of structured and unstructured data.
- Collaborate with data scientists, analysts, and stakeholders to define data models and deliver business insights.
- Optimize queries and performance on big data platforms.
- Integrate data from various sources, including Azure Data Lake, SQL, and NoSQL systems.
- Build reusable code and libraries for future use in the data pipeline framework.
- Maintain and enhance data governance, quality, and security standards.
- Troubleshoot and resolve technical issues and support production pipelines.

Required Skills & Experience:
- 3 to 10 years of experience in data engineering or big data development.
- Strong hands-on experience in Databricks and Apache Spark (PySpark preferred).
- Proficient in Python and/or Scala for data processing.
- Solid understanding of data warehousing concepts, data lakes, and data lakehouse architecture.
- Experience working with Azure cloud services (ADF, ADLS, Synapse) or AWS Glue/EMR is a plus.
- Strong experience in SQL and performance tuning of queries.
- Experience in CI/CD integration and version control (e.g., Git, Azure DevOps).
- Good understanding of Delta Lake, MLflow, and notebooks.

Nice to Have:
- Databricks certification (Developer or Data Engineer Associate/Professional).
- Knowledge of streaming frameworks like Kafka or Structured Streaming.
- Exposure to Airflow, Azure Data Factory, or similar orchestration tools.
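A hedged sketch of a Delta Lake upsert (MERGE), a common ELT pattern for the lakehouse work described above. The paths and the join key are hypothetical.

```python
# Sketch of a Delta Lake upsert (MERGE) in a Databricks-style environment.
# Paths and the business key are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_upsert").getOrCreate()

# Incremental batch landed by an upstream ingestion step (assumed path)
updates = spark.read.parquet("/mnt/landing/orders_increment")
target = DeltaTable.forPath(spark, "/mnt/silver/orders")

# Upsert: update matching rows, insert new ones
(target.alias("t")
       .merge(updates.alias("s"), "t.order_id = s.order_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```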
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
Telangana
On-site
About Chubb
Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance, and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength, and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.
About Chubb India
At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb, where we believe in fostering an environment where everyone can thrive, innovate, and grow.
With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.
Position Details
Role: MLOps Engineer
Experience: 5-10 Years
Mandatory Skills: Python/MLOps/Docker and Kubernetes/FastAPI or Flask/CI-CD/Jenkins/Spark/SQL/RDB/Cosmos/Kafka/ADLS/API/Databricks
Other Skills: Azure/LLMOps/ADF/ETL
Location: Bangalore
Notice Period: Less than 60 days
Job Description:
We are seeking a talented and passionate Machine Learning Engineer to join our team and play a pivotal role in developing and deploying cutting-edge machine learning solutions. You will work closely with other engineers and data scientists to bring machine learning models from proof-of-concept to production, ensuring they deliver real-world impact and solve critical business challenges.
- Collaborate with data scientists, model developers, software engineers, and other stakeholders to translate business needs into technical solutions.
- Experience deploying ML models to production.
- Create high-performance real-time inferencing APIs and batch inferencing pipelines to serve ML models to stakeholders.
- Integrate machine learning models seamlessly into existing production systems.
- Continuously monitor and evaluate model performance, and retrain models automatically or periodically.
- Streamline existing ML pipelines to increase throughput.
- Identify and address security vulnerabilities in existing applications proactively.
- Design, develop, and implement machine learning models, preferably for insurance-related applications.
- Well versed with the Azure ecosystem.
- Knowledge of NLP and Generative AI techniques; relevant experience will be a plus.
- Knowledge of machine learning algorithms and libraries (e.g., TensorFlow, PyTorch) will be a plus.
- Stay up to date on the latest advancements in machine learning and contribute to ongoing innovation within the team.
Why Chubb?
Join Chubb to be part of a leading global insurance company!
Our constant focus on employee experience, along with a start-up-like culture, empowers you to achieve impactful results.
- Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence.
- A great place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025, and 2025-2026.
- Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness, where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results.
- Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter.
- Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment.
Employee Benefits
Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include:
- Savings and investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), retiral benefits, and car lease that help employees optimally plan their finances.
- Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, like education reimbursement programs, certification programs, and access to global learning programs.
- Health and welfare benefits: We care about our employees' well-being in and out of work and offer benefits like an Employee Assistance Program (EAP), yearly free health campaigns, and comprehensive insurance benefits.
Application Process
Our recruitment process is designed to be transparent and inclusive.
Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments and technical/functional interviews (if applicable).
Step 4: Final interaction with Chubb leadership.
Join Us
With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey.
Apply Now: Chubb External Careers
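To illustrate the real-time inferencing API responsibility named in this role, here is a minimal FastAPI sketch that loads a serialized model and exposes a scoring endpoint. The model file, feature names, and score semantics are hypothetical; authentication, validation, and error handling are omitted for brevity.

```python
# Minimal real-time inference API sketch with FastAPI.
# The model path and feature schema are hypothetical.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical serialized scikit-learn model

class Features(BaseModel):
    age: float
    annual_premium: float
    prior_claims: int

@app.post("/predict")
def predict(features: Features) -> dict:
    row = [[features.age, features.annual_premium, features.prior_claims]]
    score = float(model.predict_proba(row)[0][1])  # probability of positive class
    return {"claim_risk_score": score}
```

Served with, for example, `uvicorn main:app`, a service like this is what typically gets containerized with Docker and deployed on Kubernetes in an MLOps setup of the kind this posting describes.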
Posted 2 weeks ago
2.0 - 5.0 years
6 - 9 Lacs
Hyderabad
On-site
Data Engineer, DT US PxE
The Data Engineer is an integral part of the technical application development team and is primarily responsible for analyzing, planning, designing, developing, and implementing Azure data engineering solutions that meet the strategic, usability, performance, reliability, control, and security requirements of data science processes. Requires demonstrable knowledge of data engineering, AI/ML, data warehousing, and reporting applications. Must be innovative.
Work you will do
A unique opportunity to be part of a growing team that works on a premier unified data science analytics platform within Deloitte. You will be responsible for implementing, delivering, and supporting data engineering and AI/ML solutions to support the Deloitte US Member Firm.
Outcome-Driven Accountability
- Collaborate with business and IT leaders to develop and refine ideas for integrating predictive and prescriptive analytics within business processes, ensuring measurable customer and business outcomes.
- Decompose complex business problems into manageable components, facilitating the use of multiple analytic modeling methods for holistic and valuable solutions.
- Develop and refine prototypes and proofs of concept, presenting results to business and IT leaders and demonstrating the impact on customer needs and business outcomes.
Technical Leadership and Advocacy
- Engage in data analysis, generating and testing hypotheses, preparing and analyzing historical data, identifying patterns, and applying statistical methods to formulate solutions that deliver high-quality outcomes.
- Develop project plans, including resource needs and task dependencies, to meet project deliverables with a focus on incremental and iterative delivery.
Engineering Craftsmanship
- Participate in defining project scope, objectives, and quality controls for new projects, ensuring alignment with customer-centric engineering principles.
- Present and communicate project deliverable results, emphasizing the value delivered to customers and the business.
Customer-Centric Engineering
- Assist in recruiting and mentoring team members, fostering a culture of engineering craftsmanship and continuous learning.
Incremental and Iterative Delivery
- Stay abreast of changes in technology, leading new technology evaluations for predictive and statistical analytics, and advocating for innovative, lean, and feasible solutions.
Education: Bachelor's degree in Computer Science or Business Information Systems, MCA, or an equivalent degree.
Qualifications:
- 2 to 5 years of advanced experience in Azure data engineering.
- Expertise in developing, deploying, and monitoring ADF pipelines (using Visual Studio and browsers).
- Expertise in Azure Databricks programming (PySpark, SparkR, and SparkSQL) or Amazon EMR (Elastic MapReduce).
- Expertise in managing Azure storage (Azure Data Lake Storage Gen2, Azure Blob Storage, Azure SQL Database) as well as Azure Synapse Analytics and Azure Data Factory.
- Advanced programming skills in Python, R, and SQL (SQL for HANA, MS SQL).
- Hands-on experience with visualization tools (Tableau / Power BI).
- Hands-on experience with data science studios (Dataiku, Azure ML Studio, Amazon SageMaker).
The Team
Information Technology Services (ITS) helps power Deloitte's success. ITS drives Deloitte, which serves many of the world's largest, most respected organizations. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market.
Our reputation is built on a tradition of delivering with excellence. The ~3,000 professionals in ITS deliver services including:
- Security, risk & compliance
- Technology support
- Infrastructure
- Applications
- Relationship management
- Strategy
- Deployment
- PMO
- Financials
- Communications
Product Engineering (PxE)
Product Engineering (PxE) is the internal software and applications development team responsible for delivering leading-edge technologies to Deloitte professionals. Their broad portfolio includes web and mobile productivity tools that empower our people to log expenses, enter timesheets, book travel, and more, anywhere, anytime. PxE enables our client service professionals through a comprehensive suite of applications across the business lines. In addition to application delivery, PxE offers full-scale design services, a robust mobile portfolio, cutting-edge analytics, and innovative custom development.
How you will grow
At Deloitte, we have invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. As part of our efforts, we provide our professionals with a variety of learning and networking opportunities, including exposure to leaders, sponsors, coaches, and challenging assignments, to help accelerate their careers along the way. No two people learn in exactly the same way, so we provide a range of resources, including live classrooms, team-based learning, and eLearning. Deloitte University (DU): The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad office, is an extension of the DU in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Deloitte's culture
Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.
Corporate citizenship
Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.
Disclaimer: Please note that this description is subject to change based on business/engagement requirements and at the discretion of management.
Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are.
Our purpose comes through in our work with clients, enabling impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code: 302720
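As an illustration of the ADF pipeline development, deployment, and monitoring expertise listed in the qualifications above, here is a minimal sketch of triggering and polling a pipeline run from Python. It assumes the azure-identity and azure-mgmt-datafactory packages; the subscription, resource group, factory, and pipeline names are hypothetical, and a production job would add retries and alerting.

```python
# Minimal sketch: trigger an ADF pipeline run and poll until it finishes.
# All resource names are hypothetical placeholders.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, "<subscription-id>")  # hypothetical

run = adf.pipelines.create_run(
    resource_group_name="rg-data",   # hypothetical resource group
    factory_name="adf-analytics",    # hypothetical data factory
    pipeline_name="pl_daily_load",   # hypothetical pipeline
)

# Poll the run until it reaches a terminal state.
while True:
    status = adf.pipeline_runs.get("rg-data", "adf-analytics", run.run_id).status
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(30)

print(f"Pipeline finished with status: {status}")
```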
Posted 2 weeks ago