3.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a hands-on and motivated Azure DataOps Engineer to support our cloud-based data operations and workflows. This role is ideal for someone with strong foundational knowledge of Azure data services and data pipelines who is looking to grow in a fast-paced environment. You will work closely with senior engineers and analysts to manage data pipelines, ensure data quality, and assist in deployment and monitoring activities.

Your Key Responsibilities
- Support the execution and monitoring of Azure Data Factory (ADF) pipelines and Azure Synapse workloads.
- Assist in maintaining data in Azure Data Lake and troubleshoot ingestion and access issues.
- Collaborate with the team to support Databricks notebooks and manage small transformation tasks.
- Perform ETL operations and ensure timely and accurate data movement between systems.
- Write and debug intermediate-level SQL queries for data validation and issue analysis (see the sketch after this posting).
- Monitor pipeline health using Azure Monitor and Log Analytics, and escalate issues as needed.
- Support deployment activities using Azure DevOps pipelines.
- Maintain and update SOPs, and assist in documenting known issues and recurring tasks.
- Participate in incident management and contribute to resolution and knowledge sharing.

Skills And Attributes For Success
- Strong understanding of cloud-based data workflows, especially in Azure environments.
- Analytical mindset with the ability to troubleshoot data pipeline and transformation issues.
- Comfortable working with large datasets and navigating both structured and semi-structured data.
- Ability to follow runbooks and SOPs, and collaborate effectively with other technical teams.
- Willingness to learn new technologies and adapt in a dynamic environment.
- Good communication skills to interact with stakeholders, document findings, and share updates.
- Discipline to work independently, manage priorities, and escalate issues responsibly.

To qualify for the role, you must have
- 2-3 years of experience in DataOps or Data Engineering roles
- Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
- Experience working with Informatica CDI or similar data integration tools
- Scripting and automation experience in Python/PySpark
- Ability to support data pipelines in a rotational on-call or production support environment
- Comfort working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
- Working knowledge of Azure Data Factory, Data Lake, and Synapse
- Exposure to Azure Databricks - ability to understand and run existing notebooks
- Understanding of ETL processes and data flow concepts
Good to have
- Experience with Power BI or Tableau for basic reporting and data visualization
- Exposure to Informatica CDI or any other data integration platform
- Basic scripting knowledge in Python or PySpark for data processing or automation tasks
- Proficiency in writing SQL for querying and analyzing structured data
- Familiarity with Azure Monitor and Log Analytics for pipeline monitoring
- Experience supporting DevOps deployments or familiarity with Azure DevOps concepts

What We Look For
- Enthusiastic learners with a passion for data operations and practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations - Argentina, China, India, the Philippines, Poland and the UK - and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
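As a hedged illustration of the data-validation responsibility above, here is a minimal PySpark sketch; the lake paths, table, and column names are hypothetical placeholders, not taken from the posting:

```python
# Minimal data-validation sketch in PySpark (hypothetical paths and columns).
# Compares row counts and checks for null business keys after an ADF-driven load.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-load-validation").getOrCreate()

source = spark.read.format("delta").load("/mnt/datalake/raw/orders")      # assumed path
target = spark.read.format("delta").load("/mnt/datalake/curated/orders")  # assumed path

# 1. Row-count reconciliation between raw and curated layers.
src_count, tgt_count = source.count(), target.count()
if src_count != tgt_count:
    print(f"Count mismatch: raw={src_count}, curated={tgt_count}")

# 2. Null check on business keys, the kind of SQL this role would write and debug.
target.createOrReplaceTempView("orders_curated")
nulls = spark.sql("""
    SELECT COUNT(*) AS null_keys
    FROM orders_curated
    WHERE order_id IS NULL OR customer_id IS NULL
""").first()["null_keys"]
if nulls > 0:
    print(f"Validation failed: {nulls} rows with null business keys")
```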
Posted 1 week ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Data Engineer - ETL
Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL's executive leadership team to maximize benefits and facilitate sustained competitive advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics team (IDA), is focused on driving innovation through optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market.

As we develop an enterprise-wide data and digital strategy that moves us toward greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team's efforts towards creating, enhancing, and stabilizing the Enterprise data lake through the development of data pipelines. This role requires a person who is a team player and can work well with team members from other disciplines to deliver data in an efficient and strategic manner.

What You'll Be DOING
What will your essential responsibilities include?
- Act as a data engineering expert and partner to Global Technology and data consumers in controlling complexity and cost of the data platform, whilst enabling performance, governance, and maintainability of the estate.
- Understand current and future data consumption patterns and architecture at a granular level, and partner with Architects to ensure optimal design of data layers.
- Apply best practices in data architecture: for example, the balance between materialization and virtualization, optimal levels of de-normalization, caching and partitioning strategies, choice of storage and querying technology, and performance tuning.
- Lead and execute hands-on research into new technologies, formulating frameworks to assess new technology against business benefit and implications for data consumers.
- Act as a best-practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability, and release, enabling rapid growth in data inventory and utilization of the Data Science Platform.
- Design prototypes and work in a fast-paced iterative solution delivery model.
- Design, develop, and maintain ETL pipelines using PySpark in Azure Databricks with Delta tables (see the sketch after this posting); use Harness for the deployment pipeline.
- Monitor performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed.
- Diagnose system performance issues related to data processing and implement solutions to address them.
- Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture.
- Maintain integrity and quality across all pipelines and environments.
- Understand and follow secure coding practices to ensure code is not vulnerable.
You will report to the Application Manager.

What You Will BRING
We're looking for someone who has these abilities and skills:

Required Skills And Abilities
- Effective communication skills.
- Bachelor's degree in computer science, mathematics, statistics, finance, a related technical field, or equivalent work experience.
- Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills.
- Relevant years of programming experience using Databricks.
- Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse, and ADLS).
- Solid knowledge of network and firewall concepts.
- Solid experience writing, optimizing, and analyzing SQL.
- Relevant years of experience with Python.
- Ability to break down complex data requirements into achievable targets and architect solutions for them.
- Robust familiarity with Software Development Life Cycle (SDLC) processes and workflows, especially Agile.
- Experience using Harness.
- Technical lead responsible for both individual and team deliveries.

Desired Skills And Abilities
- Worked in big data migration projects.
- Worked on performance tuning at both the database and big data platform levels.
- Ability to interpret complex data requirements and architect solutions.
- Distinctive problem-solving and analytical skills combined with robust business acumen.
- Strong fundamentals in Parquet and Delta file formats.
- Effective knowledge of the Azure cloud computing platform.
- Familiarity with reporting software - Power BI is a plus.
- Familiarity with DBT is a plus.
- Passion for data and experience working within a data-driven organization.
- You care about what you do, and what we do.

Who WE Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business - property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What we OFFER
Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That's why we have made a strategic commitment to attract, develop, advance and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It's about helping one another - and our business - to move forward and succeed.
- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe
- Robust support for Flexible Working Arrangements
- Enhanced family-friendly leave benefits
- Named to the Diversity Best Practices Index
- Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future.
Our 2023-26 Sustainability strategy, called "Roots of resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
- Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We're committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
- Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
- Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
- AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.

For more information, please see axaxl.com/sustainability.
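As a hedged illustration of the Databricks/Delta responsibility named above, a minimal PySpark sketch; the mount paths, feed, and columns are hypothetical assumptions:

```python
# Minimal Databricks-style ETL sketch using PySpark and Delta tables.
# Paths, table, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest a raw feed landed by an upstream process.
raw = spark.read.json("/mnt/landing/orders/2025-07-20/")

# Transform: typing, de-duplication, and a derived partition column.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)

# Write to a partitioned Delta table; Delta provides ACID guarantees on the lake.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("order_date")
      .save("/mnt/curated/orders"))
```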
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
A Data Engineer Extraordinaire will possess masterful proficiency in crafting scalable and efficient solutions for data processing and analysis. With expertise in database management, ETL processes, and data modeling, they design robust pipelines using cutting-edge technologies such as Apache Spark and Hadoop. Their proficiency extends to cloud platforms like AWS, Azure, or Google Cloud Platform, where they leverage scalable resources to build resilient data ecosystems. This exceptional individual possesses a deep understanding of business requirements, collaborating closely with stakeholders to ensure that data infrastructure aligns with organizational objectives. Through their technical acumen and innovative spirit, they pave the way for data-driven insights and empower organizations to thrive in the digital age.

Key Responsibilities
- Develop and maintain cutting-edge data pipeline architecture, ensuring optimal performance and scalability.
- Build seamless ETL pipelines for diverse sources, leveraging advanced big data technologies.
- Craft advanced analytics tools that leverage the robust data pipeline, delivering actionable insights to drive business decisions.
- Prototype and iterate test solutions for identified functional and technical challenges, driving innovation and problem-solving.
- Champion ETL best practices and standards, ensuring adherence to industry-leading methodologies.
- Collaborate closely with stakeholders across Executive, Product, Data, and Design teams, addressing data-related technical challenges and supporting their infrastructure needs.
- Thrive in a dynamic, cross-functional environment, working collaboratively to drive innovation and deliver impactful solutions.

Required Skills and Qualifications
- Proficient in SQL, Python, Spark, and data transformation techniques.
- Cloud platforms: experience with AWS, Azure, or Google Cloud (GCP) for deploying and managing data services.
- Data orchestration: proficient in using orchestration tools such as Apache Airflow, Azure Data Factory (ADF), or similar tools for managing complex workflows (see the sketch after this posting).
- Data platform experience: hands-on experience with Databricks or similar platforms for data engineering workloads.
- Familiarity with data lakes and warehouses: experience working with data lakes, data warehouses (Redshift/SQL Server/BigQuery), and big data processing architectures.
- Version control & CI/CD: proficient in Git, GitHub, or similar version control systems, and comfortable working with CI/CD pipelines.
- Data security: knowledge of data governance, encryption, and compliance practices within cloud environments.
- Problem-solving: analytical thinking and a problem-solving mindset, with a passion for optimizing data workflows.

Preferred Skills and Qualifications
- Bachelor's degree or equivalent in computer science, engineering, or a related field.
- 3+ years of experience in data engineering or related roles.
- Hands-on experience with distributed computing and parallel data processing.

Good to Have
- Streaming tools: experience with Kafka, Event Hubs, Amazon SQS, or equivalent streaming technologies.
- Containerization: familiarity with Docker and Kubernetes for deploying scalable data solutions.
- Engage in peer review processes and present research findings at esteemed ML/AI conferences such as NIPS, ICML, AAAI, and COLT.
- Experiment with the latest advancements in Data Engineering tools, platforms, and methodologies.
- Mentor peers and junior members and handle multiple projects at the same time.
- Participate and speak at various external forums such as research conferences and technical summits.
- Promote and support company policies, procedures, mission, values, and standards of ethics and integrity.
- Certifications in AWS, Azure, or GCP are a plus.
- Understanding of modern data architecture patterns, including the Lambda and Kappa architectures.
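To make the orchestration requirement concrete, a minimal Apache Airflow sketch; the DAG id, task names, and callable are hypothetical, not from the posting:

```python
# Minimal Apache Airflow DAG sketch (hypothetical DAG and task names).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(**context):
    # Placeholder for the actual extract/load logic (e.g., a Spark job
    # submission or an API pull); kept trivial for illustration.
    print(f"Running for logical date {context['ds']}")


with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # `schedule` is the Airflow 2.4+ spelling
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
    # Downstream tasks (transform, quality checks) would chain here:
    # extract >> transform >> publish
```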
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Pinnacle
Pinnacle is a leader in providing innovative workforce solutions, dedicated to optimizing talent acquisition and management processes. Our commitment to excellence has earned us the trust of businesses looking to enhance their talent strategies. We cultivate a dynamic and collaborative environment that empowers our employees to excel and contribute to our clients' success.

Job Summary
We are seeking a highly skilled ETL Developer to join our team in Chennai. The ideal candidate will be responsible for designing, developing, and maintaining ETL processes, as well as data warehouse design and modeling, to support our data integration and business intelligence initiatives. This role requires proficiency in T-SQL, Azure Data Factory (ADF), and SSIS, along with excellent problem-solving and communication skills.

Responsibilities
- Design, develop, and maintain ETL processes to support data integration and business intelligence initiatives.
- Utilize T-SQL to write complex queries and stored procedures for data extraction and transformation (see the sketch after this posting).
- Implement and manage ETL processes using SSIS (SQL Server Integration Services).
- Design and model data warehouses to support reporting and analytics needs.
- Ensure data accuracy, quality, and integrity through effective testing and validation procedures.
- Collaborate with business analysts and stakeholders to understand data requirements and deliver solutions that meet their needs.
- Monitor and troubleshoot ETL processes to ensure optimal performance and resolve any issues promptly.
- Document ETL processes, workflows, and data mappings to ensure clarity and maintainability.
- Stay current with industry trends and best practices in ETL development, data integration, and data warehousing.

Must Haves
- Proven experience as an ETL Developer or in a similar role.
- Proficiency in T-SQL for writing complex queries and stored procedures.
- Experience with SSIS (SQL Server Integration Services) for developing and managing ETL processes.
- Knowledge of ADF (Azure Data Factory) and its application in ETL processes.
- Experience in data warehouse design and modeling.
- Knowledge of Microsoft's Azure cloud suite, including Data Factory, Data Storage, Blob Storage, Power BI, and Power Automate.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
- Strong attention to detail and commitment to data quality.
- Bachelor's degree in Computer Science, Information Technology, or a related field is preferred.

Timing: 5:30-2:30 for the first 3-6 months; 2:00-11:00 IST thereafter.
Location: Chennai (Perungudi), hybrid, 3 days in office.
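To ground the T-SQL stored-procedure work described above, a minimal Python sketch using pyodbc; the server, database, and procedure names are hypothetical assumptions:

```python
# Minimal sketch: invoking a T-SQL stored procedure from Python via pyodbc.
# Server, database, procedure, and column names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=EDW;"
    "Authentication=ActiveDirectoryInteractive;"
)

with conn.cursor() as cur:
    # A typical nightly-load pattern: call a proc that stages and merges data,
    # then read back a row count for validation.
    cur.execute("EXEC dbo.usp_LoadDimCustomer @LoadDate = ?", "2025-07-20")
    conn.commit()

    cur.execute(
        "SELECT COUNT(*) FROM dbo.DimCustomer WHERE LoadDate = ?", "2025-07-20"
    )
    print("Rows loaded:", cur.fetchone()[0])
```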
Posted 1 week ago
7.0 years
20 Lacs
India
On-site
Job Description:
Experience: 7 years
Location: Hyderabad

We are seeking a skilled and dynamic Azure Data Engineer to join our growing data engineering team. The ideal candidate will have a strong background in building and maintaining data pipelines and working with large datasets on the Azure cloud platform. The Azure Data Engineer will be responsible for developing and implementing efficient ETL processes, working with data warehouses, and leveraging cloud technologies such as Azure Data Factory (ADF), Azure Databricks, PySpark, and SQL to process and transform data for analytical purposes.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and implement scalable, reliable, and high-performance data pipelines using Azure Data Factory (ADF), Azure Databricks, and PySpark.
- Data Processing: Develop complex data transformations, aggregations, and cleansing processes using PySpark and Databricks for big data workloads (see the sketch after this posting).
- Data Integration: Integrate and process data from various sources such as databases, APIs, cloud storage (e.g., Blob Storage, Data Lake), and third-party services into Azure Data Services.
- Optimization: Optimize data workflows and ETL processes to ensure efficient data loading, transformation, and retrieval while ensuring data integrity and high performance.
- SQL Development: Write complex SQL queries for data extraction, aggregation, and transformation. Maintain and optimize relational databases and data warehouses.
- Collaboration: Work closely with data scientists, analysts, and other engineering teams to understand data requirements and design solutions that meet business and analytical needs.
- Automation & Monitoring: Implement automation for data pipeline deployment and ensure monitoring, logging, and alerting mechanisms are in place for pipeline health.
- Cloud Infrastructure Management: Work with cloud technologies (e.g., Azure Data Lake, Blob Storage) to store, manage, and process large datasets.
- Documentation & Best Practices: Maintain thorough documentation of data pipelines, workflows, and best practices for data engineering solutions.

Job Type: Full-time
Pay: Up to ₹2,000,000.00 per year
Work Location: In person
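A minimal sketch of the kind of PySpark cleansing and aggregation work listed above; the storage paths and column names are hypothetical assumptions:

```python
# Minimal PySpark cleansing + aggregation sketch (hypothetical names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-aggregation").getOrCreate()

sales = spark.read.parquet("abfss://curated@mydatalake.dfs.core.windows.net/sales/")

daily_by_region = (
    sales.filter(F.col("status") == "COMPLETE")          # cleanse: drop incomplete orders
         .withColumn("sale_date", F.to_date("sale_ts"))  # normalize timestamp to date
         .groupBy("sale_date", "region")
         .agg(
             F.sum("amount").alias("revenue"),
             F.countDistinct("customer_id").alias("unique_customers"),
         )
)

# Land the aggregate for downstream reporting (e.g., a Synapse or Power BI source).
daily_by_region.write.mode("overwrite").parquet(
    "abfss://serving@mydatalake.dfs.core.windows.net/daily_sales_by_region/"
)
```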
Posted 1 week ago
3.0 - 8.0 years
0 Lacs
Delhi
Remote
Role: MS Fabric (Remote)
Budget: 1 LPM
Experience: 3-8 yrs

Role Description
We are looking for an experienced Data Engineer/BI Developer with strong hands-on expertise in Microsoft Fabric technologies, including OneLake, Lakehouse, Data Lake, Warehouse, and Real-Time Analytics, along with proven skills in Power BI, Azure Synapse Analytics, and Azure Data Factory (ADF). The ideal candidate should also possess working knowledge of DevOps practices for data engineering and deployment automation.

Key Responsibilities:
- Design and implement scalable data solutions using Microsoft Fabric components: OneLake, Data Lake, Lakehouse, Warehouse, and Real-Time Analytics (see the sketch after this posting).
- Build and manage end-to-end data pipelines integrating structured and unstructured data from multiple sources.
- Integrate Microsoft Fabric with Power BI, Synapse Analytics, and Azure Data Factory to enable modern data analytics solutions.
- Develop and maintain Power BI datasets, dashboards, and reports using data from Fabric Lakehouses or Warehouses.
- Implement data governance, security, and compliance policies within the Microsoft Fabric ecosystem.
- Collaborate with stakeholders for requirements gathering, data modeling, and performance tuning.
- Leverage Azure DevOps / Git for version control, CI/CD pipelines, and deployment automation of data artifacts.
- Monitor, troubleshoot, and optimize data flows and transformations for performance and reliability.

Required Skills:
- 3-8 years of experience in data engineering, BI development, or similar roles.
- Strong hands-on experience with the Microsoft Fabric ecosystem: OneLake, Data Lake, Lakehouse, Warehouse, Real-Time Analytics.
- Proficient in Power BI for interactive reporting and visualization.
- Experience with Azure Synapse Analytics, ADF (Azure Data Factory), and related Azure services.
- Good understanding of data modeling, SQL, T-SQL, and Spark/Delta Lake concepts.
- Working knowledge of DevOps tools and CI/CD processes for data deployment (Azure DevOps preferred).
- Familiarity with DataOps and version control practices for data solutions.

Preferred Qualifications:
- Microsoft certifications (e.g., DP-203, PL-300, or Microsoft Fabric certifications) are a plus.
- Experience with Python, Notebooks, or KQL for Real-Time Analytics is advantageous.
- Knowledge of data governance tools (e.g., Microsoft Purview) is a plus.

Job Type: Full-time
Pay: ₹50,000.00 - ₹100,000.00 per month
Schedule: Day shift
Work Location: In person
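As a hedged illustration of Fabric Lakehouse work, a minimal PySpark sketch of the kind that runs in a Fabric notebook; the file path and table name are hypothetical assumptions:

```python
# Minimal Fabric-notebook-style sketch: load raw files, clean them, and save a
# Lakehouse Delta table. Paths and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

# A Fabric notebook provides a `spark` session; created explicitly here so the
# sketch is self-contained outside Fabric too.
spark = SparkSession.builder.getOrCreate()

events = spark.read.option("header", True).csv("Files/raw/events/")  # OneLake Files area

clean = (
    events.withColumn("event_ts", F.to_timestamp("event_ts"))
          .dropna(subset=["event_id"])
          .dropDuplicates(["event_id"])
)

# Saving as a managed Delta table makes it queryable from the SQL endpoint
# and usable as a Power BI source.
clean.write.mode("overwrite").format("delta").saveAsTable("events_clean")
```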
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
DataOps L3. The role will leverage and enhance existing technologies in the area of data and analytics solutions such as Power BI, Azure data engineering technologies, ADLS, ADB, Synapse, and other Azure services. The role will be responsible for developing and supporting IT products and solutions using these technologies and deploying them for business users.

Responsibilities
- 5 to 10 years of IT and Azure data engineering technologies experience.
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services.
- Working experience in Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON and Parquet.
- Experience in creating ADF pipelines to source and process data sets.
- Experience in creating Databricks notebooks to cleanse, transform, and enrich data sets (see the sketch after this posting).
- Development experience in orchestration of pipelines.
- Good understanding of SQL, databases, and data warehouse systems, preferably Teradata.
- Experience in deployment and monitoring techniques.
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources.
- Experience in handling operations/integration with the source repository.
- Must have good knowledge of data warehouse concepts and data warehouse modelling.
- Working knowledge of ServiceNow (SNOW), including resolving incidents, handling change requests/service requests, and reporting on metrics to provide insights.
- Collaborate with the project team to understand tasks, model tables using data warehouse best practices, and develop data pipelines to ensure the efficient delivery of data.
- Strong expertise in performance tuning and optimization of data processing systems.
- Proficient in Azure Data Factory, Azure Databricks, Azure SQL Database, and other Azure data services.
- Develop and enforce best practices for data management, including data governance and security.
- Work closely with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Proficient in implementing a DataOps framework.

Qualifications
- Azure Data Factory
- Azure Databricks
- Azure Synapse
- PySpark/SQL
- ADLS
- Azure DevOps with CI/CD implementation

Nice-to-Have Skill Sets:
- Business Intelligence tools (preferred: Power BI)
- DP-203 certified
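A minimal sketch of the JSON-to-Parquet cleansing work described above, in the Databricks-notebook style the posting implies; paths and columns are hypothetical assumptions:

```python
# Minimal sketch: cleanse a JSON feed in a Databricks-style notebook and land
# it as Parquet. Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("json-cleanse").getOrCreate()

raw = spark.read.json("abfss://raw@datalakegen2.dfs.core.windows.net/devices/")

enriched = (
    raw.filter(F.col("device_id").isNotNull())        # drop unusable records
       .withColumn("reading", F.col("reading").cast("double"))
       .withColumn("ingest_date", F.current_date())   # partition/lineage column
)

enriched.write.mode("append").partitionBy("ingest_date").parquet(
    "abfss://conformed@datalakegen2.dfs.core.windows.net/devices/"
)
```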
Posted 1 week ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing, for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.

Outcomes
- Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reusing proven solutions.
- Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
- Interpret requirements and create optimal architecture and design solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code using best standards; debug and test solutions to ensure best-in-class quality.
- Tune performance of code and align it with the appropriate infrastructure, understanding the cost implications of licenses and infrastructure.
- Create data schemas and models effectively.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes.
- Validate results with user representatives, integrating the overall solution.
- Influence and enhance customer satisfaction and employee engagement within project teams.

Measures Of Outcomes
- Adherence to engineering processes and standards
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- Number of defects post delivery
- Number of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
- Average time to detect, respond to, and resolve pipeline failures or data issues
- Number of data security incidents or compliance breaches

Outputs Expected
Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, and test cases and results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
Estimate: Create and provide input for effort and size estimation and plan resources for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models.
Interface With Customer: Clarify requirements and provide guidance to the Development Team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
Certifications: Obtain relevant domain and technology certifications.

Skill Examples
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.

Knowledge Examples
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, and Azure ADF/ADLS.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.

Additional Comments
We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL, and Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role.

Must-Have Skills
- 8+ years of hands-on experience in data engineering or big data development.
- Strong proficiency in PySpark and SQL for data transformation and pipeline development.
- Experience working in Azure Databricks or equivalent Spark-based cloud platforms.
- Practical knowledge of cloud data environments: Azure, AWS, or GCP.
- Solid understanding of data warehousing concepts, including Kimball methodology and star/snowflake schema design (see the sketch after this posting).
- Proven experience designing and maintaining ETL/ELT pipelines in production.
- Familiarity with version control (e.g., Git), CI/CD practices, and data pipeline orchestration tools (e.g., Airflow, Azure Data Factory).

Skills: Azure Data Factory, Azure Databricks, PySpark, SQL
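A minimal sketch of the star-schema (Kimball) modeling named above: building a dimension with surrogate keys and a conforming fact load in PySpark. All table and column names are hypothetical assumptions:

```python
# Minimal star-schema sketch: build a customer dimension with surrogate keys
# and join it into a fact load. Table/column names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("star-schema-load").getOrCreate()

orders = spark.read.table("staging.orders")  # assumed staged source

# Dimension: distinct customers with a deterministic surrogate key.
# (Unpartitioned row_number is fine for a sketch; at scale, use identity
# columns or hashing instead.)
dim_customer = (
    orders.select("customer_id", "customer_name", "segment").distinct()
          .withColumn("customer_sk",
                      F.row_number().over(Window.orderBy("customer_id")))
)
dim_customer.write.mode("overwrite").saveAsTable("dw.dim_customer")

# Fact: measures plus foreign keys into the dimension (Kimball-style grain:
# one row per order).
fact_orders = (
    orders.join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
          .select("order_id", "customer_sk",
                  F.to_date("order_ts").alias("order_date_key"),
                  "quantity", "amount")
)
fact_orders.write.mode("append").saveAsTable("dw.fact_orders")
```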
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities
ETL Development, Testing, and Management: Develop and manage ETLs for IFP model runs and result retrieval. Design dataflows, establish checks and controls, and validate ETL processes. Skills required: knowledge of IDM, ADF, IFP, Integrate App, MG-ALFA, SQL, Power BI, AWS, and basic scripting in PowerShell. Criticality: High. Approximate time effort: 90 hours per month.
Data Input and Updates: Process monthly recurring input updates, including addition or removal of inputs, schema changes, metadata changes, etc. Manage additional updates accounting for model valuation enhancements. Perform testing and deviation analysis. Skills required: familiarity with key assumptions, policy attributes, and data aggregation for different purposes. Criticality: High. Approximate time effort: 90 hours per month.
Reporting and Dashboards: Extract daily and job compute reports. Segment records by model, project, and business unit levels. Publish Power BI dashboards with data segregations for analysis (see the sketch after this posting). Skills required: proficiency in Excel, Power BI, and SQL. Criticality: Medium. Approximate time effort: 2 to 4 hours per week.
Performance Monitoring: Analyze run times and propose changes to reduce them while preserving the original functionality of the ETLs. Clean up the data transforms to make sure the process is streamlined. Skills required: analytical skills to compare ETL run times and suggest modifications. Criticality: Low. Approximate time effort: 2 to 4 days per month.
Independent Projects: Work on various independent projects identified from time to time.

Qualifications
Professional Background: Prior experience working in an insurance company is an added advantage.
Education: Bachelor's degree in Mathematics, Science, Finance, Economics, or any related field.
Positive Attitude: Ability to work under pressure while maintaining a positive and proactive approach.
Analytical and Research Skills: Strong ability to analyze data, identify issues, and develop solutions.
Communication Skills: Excellent verbal and written communication to collaborate with stakeholders effectively.
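A minimal sketch of the reporting task above: aggregating a daily job-compute export by model, project, and business unit for a Power BI dashboard feed. The file and column names are hypothetical assumptions:

```python
# Minimal sketch: summarize daily job-compute reports for a dashboard feed.
# File and column names are hypothetical placeholders.
import pandas as pd

runs = pd.read_csv("job_compute_report.csv")  # assumed daily export

summary = (
    runs.groupby(["model", "project", "business_unit"], as_index=False)
        .agg(runs_count=("run_id", "count"),
             avg_runtime_min=("runtime_minutes", "mean"),
             total_compute_hours=("compute_hours", "sum"))
        .sort_values("total_compute_hours", ascending=False)
)

# Land the summary where the Power BI dataset picks it up.
summary.to_csv("compute_summary_by_bu.csv", index=False)
```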
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a highly skilled Azure Data Architect for our client, a leading consulting firm in India. The ideal candidate will have a proven track record in overseeing multiple data warehouse and data lake implementations, possess a deep understanding of data modeling, and be well-versed in driving digital transformation initiatives.

Key Skills: Azure Data Engineering, ADF, Databricks, Data Modeling

To apply for this position, it is mandatory to register on our platform at www.curataid.com and take a 10-minute technical quiz on the Azure Data Engineering/Azure Databricks skill to speed up the shortlisting process.

Key Responsibilities
- Design and implement data architecture solutions aligned with business goals.
- Develop and maintain data models, dictionaries, and flow diagrams to ensure clarity and governance.
- Collaborate with business stakeholders to gather requirements and provide technical clarifications.
- Drive data ingestion, transformation, and summarization pipelines using modern tools and frameworks.
- Ensure robust data management practices across SQL, NoSQL, and Hadoop ecosystems.
- Lead data integration and ETL processes for seamless data flow across systems.
- Build and optimize solutions using Azure Data Services (ADLS, ADF, Synapse).
- Implement Databricks-based medallion architecture and manage data with Unity Catalog for secure governance (see the sketch after this posting).

Required Skills & Experience
- 8+ years of experience in data engineering/architecture.
- Proven experience as a Data Architect or similar role.
- Strong expertise in data warehousing, data modeling, and ETL design.
- Hands-on experience with Azure data engineering tools and Databricks.
- Deep knowledge of SQL, NoSQL, Hadoop, and modern data lakehouse concepts.
- Excellent collaboration and communication skills.

Unfortunately, due to the high volume of applicants, not everybody will receive a response.

About Us
CuratAId is a hiring platform for the tech industry in India, providing pre-vetted candidates to recruiters. Our expert interviewers screen candidates based on their primary and secondary technical skills, as well as communication and behavioral traits, to provide recruiters with high-quality, qualified candidates. As a tech hiring platform, our goal is to simplify and streamline the recruitment process for both recruiters and job seekers.
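A minimal sketch of the medallion pattern named above: promoting a bronze (raw) Delta table to silver (validated). The Unity Catalog three-part names are hypothetical assumptions:

```python
# Minimal medallion-architecture sketch: promote a bronze Delta table to
# silver. Catalog/schema/table names are hypothetical Unity Catalog names.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.table("main.bronze.claims_raw")

silver = (
    bronze.filter(F.col("claim_id").isNotNull())            # basic validity rule
          .dropDuplicates(["claim_id"])                      # idempotent re-runs
          .withColumn("loaded_at", F.current_timestamp())    # lineage column
)

# Writing to a governed catalog table keeps access control in Unity Catalog.
silver.write.mode("overwrite").saveAsTable("main.silver.claims")
```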
Posted 1 week ago
1.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Ready to Build Data That Actually Matters?
At Exillar Infotech Pvt Ltd., we don't just move data - we move decisions. We're looking for a Data Engineer who's equal parts tech wizard and problem solver. If you're fluent in Python, SQL, and Azure, and dream of scalable pipelines - let's talk!

What You'll Be Doing (aka Your Superpowers):
• Build and maintain end-to-end ETL pipelines using ADF & Python (see the sketch after this posting)
• Transform data using PySpark notebooks in Azure Databricks
• Design cloud-native architecture with Synapse, Delta Lake, Azure SQL
• Optimize queries and procedures, and automate deployments via Azure DevOps
• Collaborate across teams and make data cleaner, faster, smarter
• Ensure security, performance, and compliance of data systems

What We're Looking For:
• 1+ years of experience as a Data Engineer
• Proficiency in Azure Data Factory, Synapse, Databricks, SQL & Python
• Experience with Delta Lake, Snowflake, PostgreSQL
• Git, CI/CD, DevOps - we love engineers who automate everything
• Strong logic, problem-solving chops & a good sense of data humor

Why You'll Love Working With Us
Be Part of Something Bigger: Join a forward-thinking, automation-driven team that leads with innovation.
Grow with the Flow: Level up in a data-first space that fuels learning and creativity.
Real Work, Real Impact: Build powerful systems that drive decisions across industries.
Supportive, Not Corporate: Flat structure, friendly team, and zero micromanagement.
Flex Your Flexibility: Flexible hours to match your rhythm.
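A hedged sketch of driving ADF from Python using the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline, and parameter names are hypothetical assumptions:

```python
# Minimal sketch: trigger an ADF pipeline run and poll its status from Python.
# Subscription, resource group, factory, and pipeline names are hypothetical.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

run = client.pipelines.create_run(
    resource_group_name="rg-data",
    factory_name="adf-demo",
    pipeline_name="pl_daily_ingest",
    parameters={"load_date": "2025-07-20"},
)

# Poll until the run finishes (simplified; production code would add timeouts).
while True:
    status = client.pipeline_runs.get("rg-data", "adf-demo", run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print("Pipeline finished with status:", status)
```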
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Engineer
Location: Hyderabad, India
Employment Type: Full-time
Experience: 4 to 7 years

About NationsBenefits:
At NationsBenefits, we are leading the transformation of the insurance industry by developing innovative benefits management solutions. We focus on modernizing complex back-office systems to create scalable, secure, and high-performing platforms that streamline operations for our clients. As part of our strategic growth, we are focused on platform modernization - transitioning legacy systems to modern, cloud-native architectures that support the scalability, reliability, and high performance of core back-office functions in the insurance domain.

Position Overview:
We are seeking a self-driven Data Engineer with 4-7 years of experience to build and optimize scalable ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. The role involves working across scrum teams to develop data solutions, ensure data governance with Unity Catalog, and support real-time and batch processing. Strong problem-solving skills, T-SQL expertise, and hands-on experience with Azure cloud tools are essential. Healthcare domain knowledge is a plus.

Job Description:
- Work with different scrum teams to develop all the quality database programming requirements of the sprint.
- Apply experience with Azure cloud tooling: advanced Python programming, Databricks, Azure SQL, Data Factory (ADF), Data Lake, data storage, and SSIS.
- Create and deploy scalable ETL/ELT pipelines with Azure Databricks by utilizing PySpark and SQL.
- Create Delta Lake tables with ACID transactions and schema evolution to support real-time and batch processing (see the sketch after this posting).
- Use Unity Catalog for centralized data governance, access control, and data lineage tracking.
- Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.
- Develop unit tests that can run automatically.
- Use SOLID development principles to maintain data integrity and cohesiveness.
- Interact with the product owner and business representatives to determine and satisfy needs.
- Sense of ownership and pride in your performance and its impact on the company's success.
- Critical thinking and problem-solving skills.
- Team player with good time-management, interpersonal, and communication skills.

Mandatory Qualifications:
- 4-7 years of experience as a Data Engineer.
- Self-driven with minimal supervision.
- Proven experience with T-SQL programming, Azure Databricks, Spark (PySpark/Scala), Delta Lake, Unity Catalog, and ADLS Gen2.
- Exposure to Microsoft TFS, Visual Studio, and DevOps.
- Experience with cloud platforms such as Azure.
- Analytical, problem-solving mindset.

Preferred Qualifications:
- Healthcare domain knowledge
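A minimal sketch of the Delta Lake pattern named above: an atomic MERGE upsert with schema evolution enabled. The table, feed path, and key column are hypothetical assumptions:

```python
# Minimal sketch: Delta Lake upsert with schema evolution, the pattern behind
# "ACID tables supporting real-time and batch". Names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("members-upsert").getOrCreate()

updates = spark.read.json("/mnt/landing/members_incremental/")

target = DeltaTable.forName(spark, "benefits.members")

# Allow new columns in the incoming feed to evolve the target schema.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

# MERGE is atomic (ACID), so concurrent readers never see partial updates.
(target.alias("t")
       .merge(updates.alias("s"), "t.member_id = s.member_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```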
Posted 1 week ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Role: Senior Dot Net Developer
Experience: 8+ years
Notice period: Immediate
Location: Trivandrum / Kochi

Introduction
Candidates should have 8+ years of experience in the IT industry with strong .NET/.NET Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role and hence requires strong communication skills. This is for a US client, and the resource should be hands-on, with experience in coding and Azure Cloud. Working hours are 8 hours, with 4 hours of overlap with the EST time zone (12 PM - 9 PM IST). This overlap is mandatory, as meetings happen during these hours.

Responsibilities include:
• Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery
• Integrate and support third-party APIs and external services
• Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack
• Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC)
• Participate in Agile/Scrum ceremonies and manage tasks using Jira
• Understand technical priorities, architectural dependencies, risks, and implementation challenges
• Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability

Certifications:
• Microsoft Certified: Azure Fundamentals
• Microsoft Certified: Azure Developer Associate
• Other relevant certifications in Azure, .NET, or Cloud technologies

Primary Skills:
8+ years of hands-on development experience with:
• C#, .NET Core 6/8+, Entity Framework / EF Core
• JavaScript, jQuery, REST APIs
• Expertise in MS SQL Server, including complex SQL queries, stored procedures, views, functions, packages, cursors, tables, and object types
• Skilled in unit testing with XUnit, MSTest
• Strong in software design patterns, system architecture, and scalable solution design
• Ability to lead and inspire teams through clear communication, technical mentorship, and ownership
• Strong problem-solving and debugging capabilities
• Ability to write reusable, testable, and efficient code
• Develop and maintain frameworks and shared libraries to support large-scale applications
• Excellent technical documentation, communication, and leadership skills
• Microservices and Service-Oriented Architecture (SOA)
• Experience in API integrations
2+ years of hands-on experience with Azure Cloud Services, including:
• Azure Functions
• Azure Durable Functions
• Azure Service Bus, Event Grid, Storage Queues
• Blob Storage, Azure Key Vault, SQL Azure
• Application Insights, Azure Monitoring

Secondary Skills:
• Familiarity with AngularJS, ReactJS, and other front-end frameworks
• Experience with Azure API Management (APIM)
• Knowledge of Azure containerization and orchestration (e.g., AKS/Kubernetes)
• Experience with Azure Data Factory (ADF) and Logic Apps
• Exposure to application support and operational monitoring
• Azure DevOps - CI/CD pipelines (Classic / YAML)
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
Working with data on a day-to-day basis excites you, and you are interested in building robust data architecture to identify data patterns and optimize data consumption for customers who will forecast and predict actions based on data. If this excites you, then working in our intelligent automation team at Schneider AI Hub is the perfect fit for you.

As a Lead Data Engineer at Schneider AI Hub, you will play a crucial role in the AI transformation of Schneider Electric by developing AI-powered solutions. Your responsibilities will include expanding and optimizing data and data pipeline architecture, ensuring optimal data flow and collection for cross-functional teams, and supporting software engineers, data analysts, and data scientists on data initiatives. You will be responsible for creating and maintaining optimal data pipeline architecture, designing the right schema to support functional requirements, and building production data pipelines from ingestion to consumption. Additionally, you will create preprocessing and postprocessing for various forms of data (a short preprocessing sketch follows this posting), develop data visualization and business intelligence tools, and implement internal process improvements for automating manual data processes.

To qualify for this role, you should hold a bachelor's or master's degree in computer science, information technology, or another quantitative field and have a minimum of 8 years of experience as a data engineer supporting large data transformation initiatives related to machine learning. Strong analytical skills, experience with Azure cloud services, ETLs using Spark, and proficiency in scripting languages like Python and PySpark are essential requirements for this position.

As a team player committed to the success of the team and projects, you will collaborate with various stakeholders to ensure data delivery architecture is consistent and secure across multiple data centers. Join us at Schneider Electric, where we create connected technologies that reshape industries, transform cities, and enrich lives, with a diverse and inclusive culture that values the contribution of every individual. If you are passionate about success and eager to contribute to cutting-edge projects, we invite you to be part of our dynamic team at Schneider Electric in Bangalore, India.
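A hedged sketch of preprocessing raw data into ML-ready features with PySpark, in the spirit of the role described above; the paths, columns, and thresholds are hypothetical assumptions:

```python
# Minimal sketch: preprocess a raw telemetry feed into hourly ML features.
# Paths, columns, and thresholds are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("telemetry-preprocess").getOrCreate()

telemetry = spark.read.parquet("/mnt/raw/device_telemetry/")

features = (
    telemetry.filter(F.col("reading").between(0, 10_000))    # drop sensor glitches
             .groupBy("device_id", F.window("event_ts", "1 hour"))
             .agg(F.avg("reading").alias("avg_reading"),
                  F.stddev("reading").alias("std_reading"),
                  F.count("*").alias("sample_count"))
             .withColumn("hour_start", F.col("window.start"))
             .drop("window")
)

# Forecasting models downstream consume this hourly feature table.
features.write.mode("overwrite").parquet("/mnt/features/device_hourly/")
```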
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for an experienced Data Engineer with strong expertise in Databricks and Azure Data Factory (ADF) to design, build, and manage scalable data pipelines and integration solutions. The ideal candidate will have a solid background in big data technologies, cloud platforms, and data processing frameworks to support enterprise-level data transformation and analytics initiatives.
Posted 1 week ago
6.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Consultant - Scrum Master
Career Level: D3

Introduction To Role
Are you ready to disrupt an industry and change lives? At AstraZeneca, our work directly impacts patients by redefining our ability to develop life-changing medicines. As a Senior Consultant, you'll be part of a hard-working team that empowers the business to perform at its peak, combining groundbreaking science with leading digital technology platforms and data. We are seeking an experienced Scrum Master to lead groundbreaking IT projects, collaborating with diverse teams to deliver exceptional business value through agile methodologies. Join us at a crucial stage of our journey in becoming a digital and data-led enterprise, where your expertise will drive scale and speed to deliver exponential growth.

Accountabilities
- Accountable for the delivery of Agile project sprints on time and on budget in accordance with AstraZeneca's Adaptive Delivery Framework (ADF) and standards.
- Responsible for planning, leading, organizing, and motivating scrum teams to achieve high performance and quality in delivering user stories.
- Reacts to flexible work backlogs to meet changing needs and requirements.
- Manages day-to-day operational aspects of the scrum team and scope to ensure timely project completion.
- Manages relationships with Business Analysts/Product Owners to ensure business requirements are effectively understood and committed to project iterations/phases.
- Works with the Snr PM/RTE to lead scrum teams, communicating roles and responsibilities while encouraging teamwork and collaboration.
- Ensures project work products are complete, current, and stored appropriately.
- Promotes a project-improvement mentality across teams.
- Resolves quality and compliance risks and issues, escalating to the Snr PM/RTE when appropriate.
- Supports the project team in adhering to all standards, including the AstraZeneca Adaptive Delivery Framework, quality, compliance, processes, defined technical capabilities, and standard methodologies.
- Establishes key relationships across other Scrum Masters, PMs, Snr PMs/RTEs, cross-skilled scrum teams, business-facing representatives, and 3rd-party supplier groups.

Essential Skills/Experience
- Expert in Agile methodology initiatives across global teams; certified Scrum Master with experience in SAFe 4.0 or equivalent.
- Operating in a similar role within a global business; 6-9 years in a scrum environment.
- Experience working closely with Business Analysts/Product Owners; leading cross-skilled personnel across Development, QA, and Release Management fields.
- Comfortable reporting into the Snr Project Manager/Release Train Engineer.

Desirable Skills/Experience
- Detailed knowledge of Agile principles and practices, focusing on SAFe and Scrum.
- Strong consulting and facilitation skills in leading technical teams in Agile framework adoption.
- Experience working with onshore/offshore personnel within a digital web delivery paradigm.
- Positive relationship-building and interpersonal skills; excellent listening and communication skills.
- Experience working in a global organization across cultural boundaries.
- Knowledge of Agile techniques: User Stories, ATDD, TDD, Continuous Integration/Testing, Pairing, Automated Testing, Agile Games.
- Experience with multiple Scrum teams in various contexts; familiarity with the Atlassian suite (Jira, Confluence).

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, innovation is at the heart of everything we do. We empower our employees to explore new solutions and experiment with leading-edge technology in a dynamic environment. With countless opportunities for learning and growth, you'll be part of a team that has the backing to innovate and disrupt an industry. Our diverse minds work inclusively together to make a meaningful impact by developing life-changing medicines. With investment behind us, there's no slowing us down as we strive to reach patients in need every day. Ready to make a big impact? Apply now and join us on this exciting journey!

Date Posted: 23-Jul-2025
Closing Date: 24-Jul-2025

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 1 week ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Associate Application Developer
Bangalore, India

The team works within the Americas App Solutions group to design, deliver and support software solutions used by the Risk Management function globally. The team is responsible for several applications - both internally developed and third-party products - with a heavy focus on the internally developed .NET/ETL applications. There is a substantial pipeline of work, including market-wide initiatives, security-focused transformation and major cloud migrations. Application Developers are responsible for making application changes to provide the required IT services functionality for one or more applications under a Delivery Team. This includes the development and maintenance of custom applications as well as the customization of products from software vendors.

What You'll Be DOING
What will your essential responsibilities include?
- Implement and document change in accordance with approved SDLC processes and development standards.
- Conduct unit testing to ensure changes are of sufficient quality before system testing.
- Assist in impact analysis.
- Work collaboratively with developers on other teams, internal and external, onshore and offshore.
- Support analysis and resolution of defects during application testing phases.
- May serve as a member of a Major Incident Team; provide L3 support for problem root cause analysis.
- Resolve functional defects found during testing.
- Develop estimates.
- Assist with the creation of operational guidelines.
- Assist with technical analysis of solution design.
- Proactively identify and communicate improvement opportunities.
You will report to the Application Manager.

What You Will BRING
We're looking for someone who has these abilities and skills:

Required Skills And Abilities
- Relevant years of hands-on development experience in C#, SQL, .NET Frameworks, Visual Studio, SSMS, TFS/GIT, HTML5, Web API, JSON, XML, REST, .NET Core and cloud-native applications (Azure).
- Experience with at least one of the modern web development frameworks such as Angular, React, etc.
- Experience developing apps with Azure cloud technology is essential: Web Apps, Web Jobs, queuing frameworks, event-driven architecture, Databricks, ADF, ADLS, containerization, monitoring, AKS, Azure Gateway, etc.
- Ability to integrate third-party SaaS solutions using APIs or direct database connectivity.
- Ability to develop high-transaction-volume concurrent processing.
- Proven development skills in one or more programming languages, with specific skills within the .NET domain.
- Knowledge of current .NET frameworks, SDKs, APIs and libraries.
- Experience working with APIs and system integrations.
- Proficiency with multiple application delivery models, including Agile, iterative and waterfall, preferred.
- Familiarity with CI/CD tooling; Harness experience beneficial.
- Some prior work experience in an insurance or technology field.
- Bachelor's degree in the field of computer science, information systems, or a related field preferred.

Desired Skills And Abilities
- Specification of technologies, application architectures and data structures as a basis for application change for internal assets.
- Producing quality, secure, scalable, high-performing, and resilient designs for new or improved services on both on-prem and cloud platforms.
- Lead the systems analysts, developers, and testers in sympathetic change to the applications.
- For internal assets, support Application Managers to develop and maintain the Product Roadmap and assist in defining, analyzing, planning, measuring and improving product availability and continuity.
- Define and maintain development standards such as system and data design, coding, etc.
- Maintain a capacity plan with historical performance metrics, a future forecast, and a capacity model to ensure services and infrastructure deliver performance and growth targets in a cost-effective and proactive manner.
- Manage architecture exceptions for the application, including identifying, documenting, taking through the exception approval process, and remediating where and when possible.
- Monitor application services to ensure performance consistently meets non-functional requirements (response time, security, etc.).
- Manage AXA XL security standards for the applications, including clean code, vulnerability identification and remediation, penetration tests, etc.
- Bachelor's degree in computer science or equivalent.
- Able to organize self and others, including effective scheduling, prioritization, and time management skills, completing tasks to tight deadlines.
- Good verbal, written, and presentation skills.
- Ability to build effective working relationships (internally/externally), establishing credibility amongst a wide and demanding client group.
- Comfortable taking ownership and accountability for own work and for the team, identifying the need for action (using initiative) whilst working effectively within a team.

Who WE Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business - property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What we OFFER
Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That's why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and can reach their highest potential. It's about helping one another - and our business - to move forward and succeed.
- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe.
- Robust support for Flexible Working Arrangements
- Enhanced family-friendly leave benefits
- Named to the Diversity Best Practices Index
- Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do.
We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence. Sustainability At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations. Our Pillars Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We’re committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans. Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions. Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting. AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving. For more information, please see axaxl.com/sustainability.
Posted 1 week ago
6.0 years
0 Lacs
Delhi, India
On-site
Job Title: Lead Azure Data Engineer
Experience Level: Mid - Senior Level
Location: Delhi
Duration: Full-time
Experience Required: 6-8+ Years

Description:
We are seeking a highly skilled and experienced Lead Azure Data Engineer to join our team. The ideal candidate will have a strong background in data engineering, with a focus on working with Databricks, PySpark, Scala-Spark, and advanced SQL. This role requires hands-on experience in implementing or migrating projects to Unity Catalog, optimizing performance on Databricks Spark, and orchestrating workflows using various tools.

Must Have Skills:
- MS Fabric
- ADF (Azure Data Factory)
- Azure Synapse

Key Responsibilities:
- A minimum of 6 years of data engineering and analytics project delivery experience
- At least 2 past Databricks migration projects (e.g., Hadoop to Databricks, Teradata to Databricks, Oracle to Databricks, Talend to Databricks)
- Hands-on with advanced SQL and PySpark and/or Scala-Spark
- At least 3 past Databricks projects involving performance optimization work
- Design, develop, and optimize data pipelines and ETL processes using Databricks and Apache Spark.
- Implement and optimize performance on Databricks Spark, ensuring efficient data processing and management.
- Develop and validate data formulation and data delivery for Big Data projects.
- Collaborate with cross-functional teams to define, design, and implement data solutions that meet business requirements.
- Conduct performance tuning and optimization of complex queries and data models.
- Manage and orchestrate data workflows using tools such as Databricks Workflow, Azure Data Factory (ADF), Apache Airflow, and/or AWS Glue.
- Maintain and ensure data security, quality, and governance throughout the data lifecycle.

Technical Skills:
- Extensive experience with PySpark and Scala-Spark.
- Advanced SQL skills for complex data manipulation and querying.
- Proven experience in performance optimization on Databricks Spark across at least three projects.
- Hands-on experience with data formulation and data delivery validation in Big Data projects.
- Experience in data orchestration using at least two of the following: Databricks Workflow, Azure Data Factory (ADF), Apache Airflow, AWS Glue.
- Experience in Azure Synapse
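To give candidates a feel for the kind of Databricks Spark performance work this posting describes, here is a minimal PySpark sketch of two common optimizations (a broadcast join and a partition-aware write). It is an illustration only, not the employer's codebase; all table and column names (bronze.sales, dim_product, event_date) are invented.

```python
# Illustrative only: broadcast join + partitioned write, two common
# Databricks Spark optimizations. All table/column names are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("perf-tuning-sketch").getOrCreate()

sales = spark.read.table("bronze.sales")            # large fact table (assumed)
products = spark.read.table("bronze.dim_product")   # small dimension (assumed)

# Broadcasting the small dimension lets the join avoid a full shuffle
# of the large fact table across the cluster.
enriched = sales.join(F.broadcast(products), on="product_id", how="left")

# Partitioning the output by date lets downstream queries prune
# partitions instead of scanning the whole table.
(enriched
 .write
 .mode("overwrite")
 .partitionBy("event_date")
 .saveAsTable("silver.sales_enriched"))
```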
Posted 1 week ago
6.0 - 11.0 years
8 - 12 Lacs
Chennai
Work from Office
Skills: Azure/AWS, Synapse, Fabric, PySpark, Databricks, ADF, Medallion Architecture, Lakehouse, Data Warehousing
Experience: 6+ Years
Locations: Chennai, Bangalore, Pune, Coimbatore
Work from Office
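For readers unfamiliar with the medallion/lakehouse pattern listed above, here is a hedged, minimal PySpark sketch of a bronze-to-silver step. Table names and cleansing rules are invented for illustration.

```python
# A minimal medallion-style flow (bronze -> silver); all names are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze layer: raw ingested records, kept as-is.
raw = spark.read.table("bronze.orders")

# Silver layer: deduplicated, typed, and filtered for downstream use.
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .filter(F.col("amount") > 0))

clean.write.mode("overwrite").saveAsTable("silver.orders")
```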
Posted 1 week ago
8.0 years
0 Lacs
Kerala
On-site
Job Role: Senior Dot Net Developer
Experience: 8+ years
Notice period: Immediate
Location: Trivandrum / Kochi

Introduction
Candidates should have 8+ years of experience in the IT industry, with strong .NET/.NET Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role and hence requires strong communication skills. This is for a US client, and the resource should be hands-on, with experience in coding and Azure Cloud. Working hours are 8 hours, with a mandatory 4-hour overlap with the EST time zone (12 PM - 9 PM), as meetings happen during this window.

Responsibilities include:
- Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery
- Integrate and support third-party APIs and external services
- Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack
- Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC)
- Participate in Agile/Scrum ceremonies and manage tasks using Jira
- Understand technical priorities, architectural dependencies, risks, and implementation challenges
- Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability

Certifications:
- Microsoft Certified: Azure Fundamentals
- Microsoft Certified: Azure Developer Associate
- Other relevant certifications in Azure, .NET, or Cloud technologies

Primary Skills:
- 8+ years of hands-on development experience with C#, .NET Core 6/8+, Entity Framework / EF Core, JavaScript, jQuery, and REST APIs
- Expertise in MS SQL Server, including complex SQL queries, stored procedures, views, functions, packages, cursors, tables, and object types
- Skilled in unit testing with XUnit and MSTest
- Strong in software design patterns, system architecture, and scalable solution design
- Ability to lead and inspire teams through clear communication, technical mentorship, and ownership
- Strong problem-solving and debugging capabilities
- Ability to write reusable, testable, and efficient code
- Develop and maintain frameworks and shared libraries to support large-scale applications
- Excellent technical documentation, communication, and leadership skills
- Microservices and Service-Oriented Architecture (SOA)
- Experience in API integrations
- 2+ years of hands-on experience with Azure Cloud Services, including Azure Functions, Azure Durable Functions, Azure Service Bus, Event Grid, Storage Queues, Blob Storage, Azure Key Vault, SQL Azure, Application Insights, and Azure Monitoring

Secondary Skills:
- Familiarity with AngularJS, ReactJS, and other front-end frameworks
- Experience with Azure API Management (APIM)
- Knowledge of Azure containerization and orchestration (e.g., AKS/Kubernetes)
- Experience with Azure Data Factory (ADF) and Logic Apps
- Exposure to application support and operational monitoring
- Azure DevOps - CI/CD pipelines (Classic / YAML)
Posted 1 week ago
8.0 years
15 - 20 Lacs
Cochin
On-site
Job Role: Senior Dot Net Developer
Experience: 8+ years
Location: Trivandrum / Kochi

Introduction
Candidates should have 8+ years of experience in the IT industry, with strong .NET/.NET Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role and hence requires strong communication skills. This is for a US client, and the resource should be hands-on, with experience in coding and Azure Cloud. Working hours are 8 hours, with a mandatory 4-hour overlap with the EST time zone (12 PM - 9 PM), as meetings happen during this window.

Responsibilities include:
- Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery
- Integrate and support third-party APIs and external services
- Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack
- Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC)
- Participate in Agile/Scrum ceremonies and manage tasks using Jira
- Understand technical priorities, architectural dependencies, risks, and implementation challenges
- Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability

Certifications:
- Microsoft Certified: Azure Fundamentals
- Microsoft Certified: Azure Developer Associate
- Other relevant certifications in Azure, .NET, or Cloud technologies

Primary Skills:
- 8+ years of hands-on development experience with C#, .NET Core 6/8+, Entity Framework / EF Core, JavaScript, jQuery, and REST APIs
- Expertise in MS SQL Server, including complex SQL queries, stored procedures, views, functions, packages, cursors, tables, and object types
- Skilled in unit testing with XUnit and MSTest
- Strong in software design patterns, system architecture, and scalable solution design
- Ability to lead and inspire teams through clear communication, technical mentorship, and ownership
- Strong problem-solving and debugging capabilities
- Ability to write reusable, testable, and efficient code
- Develop and maintain frameworks and shared libraries to support large-scale applications
- Excellent technical documentation, communication, and leadership skills
- Microservices and Service-Oriented Architecture (SOA)
- Experience in API integrations
- 2+ years of hands-on experience with Azure Cloud Services, including Azure Functions, Azure Durable Functions, Azure Service Bus, Event Grid, Storage Queues, Blob Storage, Azure Key Vault, SQL Azure, Application Insights, and Azure Monitoring

Secondary Skills:
- Familiarity with AngularJS, ReactJS, and other front-end frameworks
- Experience with Azure API Management (APIM)
- Knowledge of Azure containerization and orchestration (e.g., AKS/Kubernetes)
- Experience with Azure Data Factory (ADF) and Logic Apps
- Exposure to application support and operational monitoring
- Azure DevOps - CI/CD pipelines (Classic / YAML)

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Experience: .NET Core: 8 years (Preferred); Azure: 2 years (Preferred)
Work Location: In person
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Thiruvananthapuram
On-site
5 - 7 Years | 1 Opening | Trivandrum

Role description
Additional Comments: Senior Data Streaming Engineer

Build and maintain a real-time, file-based streaming data platform leveraging open-source technologies. The ideal candidate will have experience with Kubernetes (K8s), Apache Kafka, and Java multithreading, and will be responsible for:
- Developing a highly performant, scalable streaming architecture optimized for high throughput and low memory overhead
- Implementing auto-scaling solutions to support variable data loads efficiently
- Integrating reference data enrichment workflows using Snowflake
- Ensuring system reliability and real-time processing across distributed environments
- Collaborating with cross-functional teams to deliver robust, cloud-native data solutions
- Building scalable and optimized ETL/ELT workflows leveraging Azure Data Factory (ADF) and Apache Spark within Databricks

Skills: Azure, Kafka, Java, Kubernetes

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
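As a rough sketch of the consume-enrich-republish loop this role describes, here is a minimal example using Python's kafka-python client for brevity (the posting itself targets Java). The topic names, broker address, field names, and the in-memory stand-in for a Snowflake reference lookup are all assumptions.

```python
# Hedged sketch of a Kafka enrichment worker; not the employer's design.
# Topics, broker, and field names are invented; kafka-python stands in
# for the Java client the posting actually asks for.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "raw-files",                         # assumed input topic
    bootstrap_servers="localhost:9092",  # assumed broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    group_id="enrichment-workers",       # consumer group allows horizontal scaling
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Stand-in for a reference-data lookup that would really query Snowflake.
REFERENCE = {"ACME": {"region": "EMEA"}}

for record in consumer:
    event = record.value
    # Enrich each event with reference data before re-publishing downstream.
    event.update(REFERENCE.get(event.get("customer"), {}))
    producer.send("enriched-files", event)
```

In a Kubernetes deployment, scaling this pattern usually means adding consumers to the same group so Kafka rebalances partitions across them, which is one way the "auto-scaling" requirement above could be met.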
Posted 1 week ago
6.0 years
5 - 7 Lacs
Gurgaon
On-site
About Gartner IT:
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About the role:
The Lead Analytics Engineer will provide technical expertise in designing and building a modern data warehouse in the Azure cloud to meet the data needs of various business units in Gartner. You will be part of the Ingestion Team, bringing data from multiple sources into the data warehouse and collaborating with the Dashboard, Analytics, and Business teams to build end-to-end scalable data pipelines.

What you will do:
- Review and analyze business requirements and design technical mapping documents
- Build new ETL pipelines using Azure Data Factory and Synapse
- Design, build, and automate data pipelines and applications to support data scientists and business users with their reporting and analytics needs
- Collaborate on data warehouse architecture and technical design discussions
- Perform and participate in code reviews, peer inspections, and technical design and specifications, as well as document and review detailed designs
- Provide status reports to higher management
- Help define best practices and processes
- Maintain service levels and department goals for problem resolution
- Design and build tabular data models in Azure Analysis Services for seamless integration with Power BI
- Write efficient SQL queries and DAX (Data Analysis Expressions) to support robust data models, reports, and dashboards
- Tune and optimize data models and queries for maximum performance and efficient data retrieval

What you will need:
- 6-8 years of experience in data warehouse design and development
- Experience in ETL using Azure Data Factory (ADF)
- Experience writing complex T-SQL procedures in Synapse / SQL Data Warehouse
- Experience analyzing complex code and performance-tuning pipelines
- Good knowledge of Azure cloud technology and exposure to Azure cloud components
- Good understanding of business processes and analyzing underlying data
- Understanding of dimensional and relational modeling

Nice to have:
- Experience with version control systems (e.g., Git, Subversion)
- Power BI and AAS experience for tabular model design
- Experience with data intelligence platforms like Databricks

Who you are:
- Effective time management skills and ability to meet deadlines
- Excellent communication skills interacting with technical and business audiences
- Excellent organization, multitasking, and prioritization skills
- Willingness and aptitude to embrace new technologies and ideas and master concepts rapidly
- Intellectual curiosity, passion for technology, and keeping up with new trends
- Delivering project work on time, within budget, and with high quality

Don't meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles. #LI-PM3

Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we've grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories.

We do important, interesting and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 101783

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
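To illustrate the kind of post-load validation an ingestion engineer in the role above might script against Synapse, here is a hedged Python sketch using pyodbc. The connection string, authentication mode, and every table and column name are assumptions, not Gartner's actual schema.

```python
# Illustrative reconciliation check after an ADF load into Synapse.
# Endpoint, driver, and all table/column names are assumed placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.sql.azuresynapse.net;DATABASE=dw;"   # assumed endpoint
    "Authentication=ActiveDirectoryInteractive"
)
cursor = conn.cursor()

# Compare staging and warehouse row counts for yesterday's load window.
cursor.execute("""
    SELECT
        (SELECT COUNT(*) FROM stg.orders
          WHERE load_date = CAST(GETDATE() - 1 AS DATE)) AS staged,
        (SELECT COUNT(*) FROM dw.fact_orders
          WHERE load_date = CAST(GETDATE() - 1 AS DATE)) AS loaded
""")
staged, loaded = cursor.fetchone()
if staged != loaded:
    raise RuntimeError(f"Row count mismatch: staged={staged}, loaded={loaded}")
```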
Posted 1 week ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: QA Automation Engineer

As a QA Automation Engineer specializing in Data Warehousing, you will play a critical role in ensuring that our data solutions are of the highest quality. You will work closely with data engineers and analysts to develop, implement, and maintain automated testing frameworks for data validation, ETL processes, data quality, and integration. Your work will ensure that data is accurate, consistent, and performs optimally across our data warehouse systems.

Responsibilities
- Develop and Implement Automation Frameworks: Design, build, and maintain scalable test automation frameworks tailored for data warehousing environments.
- Test Strategy and Execution: Define and execute automated test strategies for ETL processes, data pipelines, and database integration across a variety of data sources.
- Data Validation: Implement automated tests to validate data consistency, accuracy, completeness, and transformation logic.
- Performance Testing: Ensure that the data warehouse systems meet performance benchmarks through automation tools and load testing strategies.
- Collaborate with Teams: Work closely with data engineers, software developers, and data analysts to understand business requirements and design tests accordingly.
- Continuous Integration: Integrate automated tests into the CI/CD pipelines, ensuring that testing is part of the deployment process.
- Defect Tracking and Reporting: Use defect-tracking tools (e.g., JIRA) to log and track issues found during automated testing, ensuring that defects are resolved in a timely manner.
- Test Data Management: Develop strategies for handling large volumes of test data while maintaining data security and privacy.
- Tool and Technology Evaluation: Stay current with emerging trends in automation testing for data warehousing and recommend tools, frameworks, and best practices.

Job Qualifications: Requirements and Skills
- At least 4+ years of experience and a solid understanding of data warehousing concepts (ETL, OLAP, data marts, data vault, star/snowflake schemas, etc.)
- Proven experience in building and maintaining automation frameworks using tools like Python, Java, or similar, with a focus on database and ETL testing
- Strong knowledge of SQL for writing complex queries to validate data, test data pipelines, and check transformations
- Experience with ETL tools (e.g., Matillion, Qlik Replicate) and their testing processes
- Performance testing
- Experience with version control systems like Git
- Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues
- Strong communication and collaboration skills
- Attention to detail and a passion for delivering high-quality solutions
- Ability to work in a fast-paced environment and manage multiple priorities
- Enthusiasm for learning new technologies and frameworks

Experience with the following tools and technologies is desired:
- Qlik Replicate
- Matillion ETL
- Snowflake
- Data Vault warehouse design
- Power BI
- Azure Cloud, including Logic Apps, Azure Functions, ADF
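As a concrete (but hypothetical) taste of the automated data validation this role describes, here is a minimal pytest-style sketch using SQLAlchemy. The connection URL, dialect, and table and column names are placeholders; real tests would target the team's actual warehouse.

```python
# Minimal data-validation tests of the kind described above; the engine
# URL (snowflake dialect requires snowflake-sqlalchemy) and all table
# names are assumed placeholders.
import sqlalchemy as sa

ENGINE = sa.create_engine("snowflake://user:pass@account/db/schema")  # assumed DSN

def scalar(query: str):
    # Run a query and return its single scalar result.
    with ENGINE.connect() as conn:
        return conn.execute(sa.text(query)).scalar()

def test_no_orphaned_orders():
    # Referential integrity: every order must reference an existing customer.
    orphans = scalar("""
        SELECT COUNT(*) FROM orders o
        LEFT JOIN customers c ON o.customer_id = c.customer_id
        WHERE c.customer_id IS NULL
    """)
    assert orphans == 0

def test_amounts_are_positive():
    # Transformation logic should never produce non-positive amounts.
    bad = scalar("SELECT COUNT(*) FROM orders WHERE amount <= 0")
    assert bad == 0
```

Run with `pytest` so failures surface in CI, which is exactly the "testing as part of the deployment process" point in the responsibilities above.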
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Dear Aspirants,

We at ValueLabs have an opening for a Senior Data Engineer role. Below is the JD for the same.

Role: Sr Data Engineer
Experience: 5+ Years (immediate joiners preferred)
Primary skill set: Power BI, with strong SQL functions and ADF

Key Responsibilities:
Minimum 8 years of relevant experience in data engineering, specifically with the following:
- Create compelling and interactive reports and dashboards using Power BI Desktop.
- Design and implement Power BI data models that efficiently integrate with various data sources.
- Automate report delivery and scheduling using Power Automate or similar tools.
- Collaborate with business stakeholders to understand reporting needs and translate those into actionable insights.
- Develop and maintain ETL processes using Azure Data Factory.
- Design and implement data warehouses using Azure Synapse Analytics.
- Optimize data storage and retrieval strategies to ensure efficient use of resources and fast query performance.
- Implement data quality checks and validation processes to ensure accuracy and reliability of data.
- Act as a technical authority within the team, providing guidance on data engineering principles, the Azure platform, and Power BI tools.
- Design, architect, and implement scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, and other relevant technologies.
- Ensure adherence to data governance standards and regulations, such as GDPR, HIPAA, etc.
- Implement robust monitoring and alerting mechanisms to detect and resolve issues proactively.
- Oversee and manage a team of data engineers, ensuring they meet project deadlines and deliver high-quality work.
- Develop and implement team guidelines, policies, and procedures to enhance productivity and performance.
- Mentor and coach team members to improve their skills and career development.
- Conduct regular one-on-one meetings to discuss progress, address concerns, and set goals.

Interested candidates can apply here or send their resume directly to imranmohammed.1@valuelabs.com
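For candidates wondering what "develop and maintain ETL processes using Azure Data Factory" looks like programmatically, here is a hedged sketch that triggers an ADF pipeline run with the Azure Python SDK (azure-identity + azure-mgmt-datafactory). The subscription ID, resource group, factory, pipeline name, and parameter are all invented for illustration.

```python
# Hedged sketch: kick off an ADF pipeline run and poll its status.
# All resource names below are placeholders, not a real deployment.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Trigger the pipeline with a runtime parameter.
run = client.pipelines.create_run(
    resource_group_name="rg-data",       # assumed resource group
    factory_name="adf-reporting",        # assumed data factory
    pipeline_name="pl_refresh_powerbi",  # assumed pipeline
    parameters={"load_date": "2025-07-23"},
)

# Check the run's current status by its run ID.
status = client.pipeline_runs.get("rg-data", "adf-reporting", run.run_id)
print(status.status)  # e.g. "InProgress" or "Succeeded"
```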
Posted 1 week ago