12.0 - 20.0 years
35 - 50 Lacs
Bengaluru
Hybrid
Data Architect with Cloud Expertise: Data Architecture, Data Integration & Data Engineering. ETL/ELT: Talend, Informatica, Apache NiFi. Big Data: Hadoop, Spark. Cloud platforms: AWS, Azure, GCP, Redshift, BigQuery. Languages: Python, SQL, Scala. Compliance: GDPR, CCPA
Posted 1 month ago
5.0 - 9.0 years
13 - 17 Lacs
Pune
Work from Office
Diacto is looking for a highly capable Data Architect with 5 to 9 years of experience to lead cloud data platform initiatives with a primary focus on Snowflake and Azure Data Hub. This individual will play a key role in defining the data architecture strategy, implementing robust data pipelines, and enabling enterprise-grade analytics solutions. This is an on-site role based in our Baner, Pune office. Qualifications: B.E./B.Tech in Computer Science, IT, or related discipline MCS/MCA or equivalent preferred Key Responsibilities: Design and implement enterprise-level data architecture with a strong focus on Snowflake and Azure Data Hub Define standards and best practices for data ingestion, transformation, and storage Collaborate with cross-functional teams to develop scalable, secure, and high-performance data pipelines Lead Snowflake environment setup, configuration, performance tuning, and optimization Integrate Azure Data Services with Snowflake to support diverse business use cases Implement governance, metadata management, and security policies Mentor junior developers and data engineers on cloud data technologies and best practices Experience and Skills Required: 5 to 9 years of overall experience in data architecture or data engineering roles Strong, hands-on expertise in Snowflake, including design, development, and performance tuning Solid experience with Azure Data Hub and Azure Data Services (Data Lake, Synapse, etc.) Understanding of cloud data integration techniques and ELT/ETL frameworks Familiarity with data orchestration tools such as DBT, Airflow, or Azure Data Factory Proven ability to handle structured, semi-structured, and unstructured data Strong analytical, problem-solving, and communication skills Nice to Have: Certifications in Snowflake and/or Microsoft Azure Experience with CI/CD tools like GitHub for code versioning and deployment Familiarity with real-time or near-real-time data ingestion Why Join Diacto Technologies? Work with a cutting-edge tech stack and cloud-native architectures Be part of a data-driven culture with opportunities for continuous learning Collaborate with industry experts and build transformative data solutions
Posted 1 month ago
4.0 - 7.0 years
6 - 9 Lacs
Hyderabad, Bengaluru
Hybrid
Job Summary We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms. Key Responsibilities Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse. Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS). Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions. Develop ETL/ELT processes and integrate data from multiple sources. Monitor, debug, and optimize workflows for performance and cost-efficiency. Ensure data governance, quality, and security best practices are maintained. Must-Have Skills 4+ years of total experience in data engineering. 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake). Strong experience with Azure Data Factory, Azure SQL, and ADLS. Proficient in writing SQL queries and Python/Scala scripting. Understanding of CI/CD pipelines and version control systems (e.g., Git). Solid grasp of data modeling and warehousing concepts. Skills: Azure Synapse, data modeling, data engineering, Azure, Azure Databricks, Azure Data Lake Storage (ADLS), CI/CD, ETL, ELT, data warehousing, SQL, Scala, Git, Azure Data Factory, Python
Posted 1 month ago
10.0 - 15.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Novo Nordisk Global Business Services (GBS) India Department - Global Data & Artificial Intelligence Are you passionate about building scalable data pipelines and optimising data workflows? Do you want to work at the forefront of data engineering, collaborating with cross-functional teams to drive innovation? If so, we are looking for a talented Data Engineer to join our Global Data & AI team at Novo Nordisk. Read on and apply today for a life-changing career! The Position As a Senior Data Engineer, you will play a key role in designing, developing, and maintaining data pipelines and integration solutions to support analytics, Artificial Intelligence workflows, and business intelligence. It includes: Design, implement, and maintain scalable data pipelines and integration solutions aligned with the overall data architecture and strategy. Implement data transformation workflows using modern ETL/ELT approaches while establishing best practices for data engineering, including testing methodologies and documentation. Optimize data workflows by harmonizing and securely transferring data across systems, while collaborating with stakeholders to deliver high-performance solutions for analytics and Artificial Intelligence. Monitoring and maintaining data systems to ensure their reliability. Support data governance by ensuring data quality and consistency, while contributing to architectural decisions shaping the data platform's future. Mentoring junior engineers and fostering a culture of engineering excellence. Qualifications Bachelor's or Master's degree in Computer Science, Software Development, or Engineering. Possess over 10 years of overall professional experience, including more than 4 years of specialized expertise in data engineering. Experience in developing production-grade data pipelines using Python, Databricks and Azure cloud, with a strong foundation in software engineering principles. Experience in the clinical data domain, with knowledge of standards such as CDISC SDTM and ADaM (Good to have). Experience working in a regulated industry (Good to have). About the department You will be part of the Global Data & AI team. Our department is globally distributed, and its mission is to harness the power of Data and Artificial Intelligence, integrating it seamlessly into the fabric of Novo Nordisk's operations. We serve as the vital link, weaving together the realms of Data and Artificial Intelligence throughout the whole organization, empowering Novo Nordisk to realize its strategic ambitions through our pivotal initiatives. The atmosphere is fast-paced and dynamic, with a strong focus on collaboration and innovation. We work closely with various business domains to create actionable insights and drive commercial excellence.
Posted 1 month ago
5.0 - 10.0 years
8 - 14 Lacs
Navi Mumbai
Work from Office
Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans. Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices. Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security. Data Integration: Define and implement data integration strategies to facilitate seamless flow of information across systems. Responsibilities: Experience in data architecture and engineering Proven expertise with Snowflake data platform Strong understanding of ETL/ELT processes and data integration Experience with data modeling and data warehousing concepts Familiarity with performance tuning and optimization techniques Excellent problem-solving skills and attention to detail Strong communication and collaboration skills Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Cloud & Data Architecture: AWS, Snowflake ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions Big Data & Analytics: Athena, Presto, Hadoop Database & Storage: SQL, SnowSQL Security & Compliance: IAM, KMS, Data Masking Preferred technical and professional experience Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization) Data Transformation: DBT (Data Build Tool) for ELT pipeline management Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance)
Posted 1 month ago
5.0 - 10.0 years
0 - 0 Lacs
Pune
Work from Office
Ciklum is looking for a Senior Data Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live. About the role: As a Senior Data Engineer, become a part of a cross-functional development team engineering experiences of tomorrow. This role demands deep technical expertise, hands-on development experience, and the ability to mentor team members while collaborating across engineering and business units. Youll work on high-performance data pipelines, microservices, and analytics solutions that directly support mission-critical systems. Responsibilities: Design, develop, and maintain scalable data pipelines using PL/SQL, Oracle, MongoDB, and related technologies Build data microservices using Java, Spring Boot, and containerization (Docker/Kubernetes) Develop, test, and deploy ETL/ELT processes to support real-time and batch data flows Work with tools like OBIEE, ODI, and Oracle APEX to deliver reporting, dashboards, and data visualization solutions Optimize data processing performance and implement best practices for data reliability, scalability, and integrity Collaborate with cross-functional teams to define data architecture, modeling, and integration strategies Participate in code reviews, troubleshooting, and tuning of high-volume transactional data systems Contribute to Agile development practices under the SAFe framework, including PI planning, system demos, and retrospectives Act as a mentor and technical guide for mid- and junior-level engineers, fostering knowledge sharing and continuous improvement Requirements: 7+ years of experience in data engineering within large-scale, enterprise environments Strong hands-on experience with: PL/SQL, Oracle DB, MongoDB Java, Spring Boot, Microservices ETL/ELT frameworks CI/CD pipelines and DevOps best practices Experience working with Oracle Business Intelligence tools (OBIEE, ODI, APEX) Proficiency in data modeling, data optimization techniques, and performance tuning Solid understanding of data lifecycle management, data security, and governance principles Experience with cloud platforms such as AWS or Azure for data storage and processing Knowledge of data visualization tools such as Power BI or Tableau Excellent analytical, communication, and documentation skills Ability to lead initiatives, mentor team members, and collaborate in cross-functional settings Desirable: Familiarity with Kafka, event-driven architecture, or streaming platforms Working experience in SAFe Agile or other scaled Agile delivery models What's in it for you? Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally Flexibility: hybrid work mode at Chennai or Pune Opportunities: we value our specialists and always find the best options for them. 
Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential Global impact: work on large-scale projects that redefine industries with international and fast-growing clients Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events About us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram , Facebook , LinkedIn . Explore, empower, engineer with Ciklum! Experiences of tomorrow. Engineered together Interested already? We would love to get to know you! Submit your application. Can’t wait to see you at Ciklum.
Posted 1 month ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Summary We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms. Key Responsibilities Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse. Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS). Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions. Develop ETL/ELT processes and integrate data from multiple sources. Monitor, debug, and optimize workflows for performance and cost-efficiency. Ensure data governance, quality, and security best practices are maintained. Must-Have Skills 4+ years of total experience in data engineering. 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake). Strong experience with Azure Data Factory, Azure SQL, and ADLS. Proficient in writing SQL queries and Python/Scala scripting. Understanding of CI/CD pipelines and version control systems (e.g., Git). Solid grasp of data modeling and warehousing concepts. Skills: Azure Synapse, data modeling, data engineering, Azure, Azure Databricks, Azure Data Lake Storage (ADLS), CI/CD, ETL, ELT, data warehousing, SQL, Scala, Git, Azure Data Factory, Python
Posted 1 month ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Secunderabad
Work from Office
Proficiency in SQL, Python, and data pipeline frameworks such as Apache Spark, Databricks, or Airflow. Hands-on experience with cloud data platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery). Strong understanding of data modeling, ETL/ELT, and data lake/warehouse/data mart architectures. Knowledge of Data Factory or AWS Glue. Experience in developing reports and dashboards using tools like Power BI, Tableau, or Looker.
Posted 1 month ago
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets. Key Responsibilities: Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena. Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and Streaming. Optimize data pipelines for performance, scalability, and cost-efficiency. Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation). Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform. Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation. Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions. Monitor, troubleshoot, and resolve issues in production pipelines. Stay abreast of AWS advancements and recommend improvements where applicable. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. 
Required Skills and Experience Bachelor’s or master’s degree in computer science, Engineering, or a related field Over 8 years of experience in data engineering More than 3 years of experience with the AWS data ecosystem Strong experience with Pyspark, SQL, and Python Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions Familiarity with data modelling concepts, dimensional models, and data lake architectures Experience with CI/CD, GitHub Actions, CloudFormation/Terraform Understanding of data governance, privacy, and security best practices Strong problem-solving and communication skills Preferred Skills and Experience Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics. AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus. Strong problem-solving and analytical thinking. Excellent communication and collaboration abilities. Ability to work independently and in agile teams. A proactive approach to identifying and addressing challenges in data workflows. Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handily: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 1 month ago
10.0 - 15.0 years
35 - 40 Lacs
Hyderabad
Hybrid
Role Description: The Director for Data Architecture and Solutions will lead Amgen's enterprise data architecture and solutions strategy, overseeing the design, integration, and deployment of scalable, secure, and future-ready data systems. This leader will define the architectural vision and guide a high-performing team of architects and technical experts to implement data and analytics solutions that drive business value and innovation. This role demands a strong blend of business acumen, deep technical expertise, and strategic thinking to align data capabilities with the company's mission and growth. The Director will also serve as a key liaison with executive leadership, influencing technology investment and enterprise data direction. Roles & Responsibilities: Develop and own the enterprise data architecture and solutions roadmap, aligned with Amgen's business strategy and digital transformation goals. Provide executive leadership and oversight of data architecture initiatives across business domains (R&D, Commercial, Manufacturing, etc.). Lead and grow a high-impact team of data and solution architects. Coach, mentor, and foster innovation and continuous improvement in the team. Design and promote modern data architectures (data mesh, data fabric, lakehouse, etc.) across hybrid cloud environments and enable AI readiness. Collaborate with stakeholders to define solution blueprints, integrating business requirements with technical strategy to drive value. Drive enterprise-wide adoption of data modeling, metadata management, and data lineage standards. Ensure solutions meet enterprise-grade requirements for security, performance, scalability, compliance, and data governance. Partner closely with Data Engineering, Analytics, AI/ML, and IT Security teams to operationalize data solutions that enable advanced analytics and decision-making. Champion innovation and continuous evolution of Amgen's data and analytics landscape through new technologies and industry best practices. Communicate architectural strategy and project outcomes to executive leadership and other non-technical stakeholders. Functional Skills: Must-Have Skills: 10+ years of experience in data architecture or solution architecture leadership roles, including experience at the enterprise level. Proven experience leading architecture strategy and delivery in the life sciences or pharmaceutical industry. Expertise in cloud platforms (AWS, Azure, or GCP) and modern data technologies (data lakes, APIs, ETL/ELT frameworks). Strong understanding of data governance, compliance (e.g., HIPAA, GxP), and data privacy best practices. Demonstrated success managing cross-functional, global teams and large-scale data programs. Experience with enterprise architecture frameworks (TOGAF, Zachman, etc.). Proven leadership skills with a track record of managing and mentoring high-performing data architecture teams. Good-to-Have Skills: Master's or doctorate in Computer Science, Engineering, or related field. Certifications in cloud architecture (AWS, GCP, Azure). Experience integrating AI/ML solutions into enterprise data architecture. Familiarity with DevOps, CI/CD pipelines, and Infrastructure as Code (Terraform, CloudFormation). Scaled Agile or similar methodology experience. Leadership and Communication Skills: Strategic thinker with the ability to influence at the executive level. Strong executive presence with excellent communication and storytelling skills. Ability to lead in a matrixed, global environment with multiple stakeholders.
Highly collaborative, proactive, and business-oriented mindset. Strong organizational and prioritization skills to manage complex initiatives. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Basic Qualifications: Doctorate degree and 2 years of Information Systems experience, or Master's degree and 6 years of Information Systems experience, or Bachelor's degree and 8 years of Information Systems experience, or Associate's degree and 10 years of Information Systems experience, or 4 years of managerial experience directly managing people and leadership experience leading teams, projects, or programs.
Posted 1 month ago
4.0 - 9.0 years
8 - 18 Lacs
Navi Mumbai, Pune, Mumbai (All Areas)
Hybrid
Job Description : Job Overview: We are seeking a highly skilled Data Engineer with expertise in SQL, Python, Data Warehousing, AWS, Airflow, ETL, and Data Modeling . The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines, ensuring efficient data processing and integration across various platforms. This role requires strong problem-solving skills, an analytical mindset, and a deep understanding of modern data engineering frameworks. Key Responsibilities: Design, develop, and optimize scalable data pipelines and ETL processes to support business intelligence, analytics, and operational data needs. Build and maintain data models (conceptual, logical, and physical) to enhance data storage, retrieval, and transformation efficiency. Develop, test, and optimize complex SQL queries for efficient data extraction, transformation, and loading (ETL). Implement and manage data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) for structured and unstructured data storage. Work with AWS, Azure , and cloud-based data solutions to build high-performance data ecosystems. Utilize Apache Airflow for orchestrating workflows and automating data pipeline execution. Collaborate with cross-functional teams to understand business data requirements and ensure alignment with data strategies. Ensure data integrity, security, and compliance with governance policies and best practices. Monitor, troubleshoot, and improve the performance of existing data systems for scalability and reliability. Stay updated with emerging data engineering technologies, frameworks, and best practices to drive continuous improvement. Required Skills & Qualifications: Proficiency in SQL for query development, performance tuning, and optimization. Strong Python programming skills for data processing, automation, and scripting. Hands-on experience with ETL development , data integration, and transformation workflows. Expertise in data modeling for efficient database and data warehouse design. Experience with cloud platforms such as AWS (S3, Redshift, Lambda), Azure, or GCP. Working knowledge of Airflow or similar workflow orchestration tools. Familiarity with Big Data frameworks like Hadoop or Spark (preferred but not mandatory). Strong problem-solving skills and ability to work in a fast-paced, dynamic environment. Role & responsibilities Preferred candidate profile
Posted 1 month ago
3.0 - 5.0 years
8 - 10 Lacs
Greater Noida
Work from Office
Seeking a software testing expert to lead software testing for LED TVs. Must have exp. in embedded systems, Android/Linux, manual & automated testing, multimedia/display validation, and tools like JIRA. Strong teamwork & leadership skills required. Required Candidate profile Experience: 3+ years in software testing (preferably in the LED TV or mobile device industry). Education: B.Tech/B.E. in Electronics, Computer Science, or a related field.
Posted 1 month ago
3.0 - 5.0 years
15 - 17 Lacs
Pune
Work from Office
Performance Testing Specialist Databricks Pipelines Key Responsibilities: - Design and execute performance testing strategies specifically for Databricks-based data pipelines. - Identify performance bottlenecks and provide optimization recommendations across Spark/Databricks workloads. - Collaborate with development and DevOps teams to integrate performance testing into CI/CD pipelines. - Analyze job execution metrics, cluster utilization, memory/storage usage, and latency across various stages of data pipeline processing. - Create and maintain performance test scripts, frameworks, and dashboards using tools like JMeter, Locust, or custom Python utilities. - Generate detailed performance reports and suggest tuning at the code, configuration, and platform levels. - Conduct root cause analysis for slow-running ETL/ELT jobs and recommend remediation steps. - Participate in production issue resolution related to performance and contribute to RCA documentation. Technical Skills: Mandatory - Strong understanding of Databricks, Apache Spark, and performance tuning techniques for distributed data processing systems. - Hands-on experience in Spark (PySpark/Scala) performance profiling, partitioning strategies, and job parallelization. - 2+ years of experience in performance testing and load simulation of data pipelines. - Solid skills in SQL, Snowflake, and analyzing performance via query plans and optimization hints. - Familiarity with Azure Databricks, Azure Monitor, Log Analytics, or similar observability tools. - Proficient in scripting (Python/Shell) for test automation and pipeline instrumentation. - Experience with DevOps tools such as Azure DevOps, GitHub Actions, or Jenkins for automated testing. - Comfortable working in Unix/Linux environments and writing shell scripts for monitoring and debugging. Good to Have - Experience with job schedulers like Control-M, Autosys, or Azure Data Factory trigger flows. - Exposure to CI/CD integration for automated performance validation. - Understanding of network/storage I/O tuning parameters in cloud-based environments.
Posted 1 month ago
5.0 - 9.0 years
7 - 17 Lacs
Pune
Work from Office
Job Overview: Diacto is looking for a highly capable Data Architect with 5 to 9 years of experience to lead cloud data platform initiatives with a primary focus on Snowflake and Azure Data Hub. This individual will play a key role in defining the data architecture strategy, implementing robust data pipelines, and enabling enterprise-grade analytics solutions. This is an on-site role based in our Baner, Pune office. Qualifications: B.E./B.Tech in Computer Science, IT, or related discipline MCS/MCA or equivalent preferred Key Responsibilities: Design and implement enterprise-level data architecture with a strong focus on Snowflake and Azure Data Hub Define standards and best practices for data ingestion, transformation, and storage Collaborate with cross-functional teams to develop scalable, secure, and high-performance data pipelines Lead Snowflake environment setup, configuration, performance tuning, and optimization Integrate Azure Data Services with Snowflake to support diverse business use cases Implement governance, metadata management, and security policies Mentor junior developers and data engineers on cloud data technologies and best practices Experience and Skills Required: 5 to 9 years of overall experience in data architecture or data engineering roles Strong, hands-on expertise in Snowflake , including design, development, and performance tuning Solid experience with Azure Data Hub and Azure Data Services (Data Lake, Synapse, etc.) Understanding of cloud data integration techniques and ELT/ETL frameworks Familiarity with data orchestration tools such as DBT, Airflow , or Azure Data Factory Proven ability to handle structured, semi-structured, and unstructured data Strong analytical, problem-solving, and communication skills Nice to Have: Certifications in Snowflake and/or Microsoft Azure Experience with CI/CD tools like GitHub for code versioning and deployment Familiarity with real-time or near-real-time data ingestion Why Join Diacto Technologies? Work with a cutting-edge tech stack and cloud-native architectures Be part of a data-driven culture with opportunities for continuous learning Collaborate with industry experts and build transformative data solutions Competitive salary and benefits with a collaborative work environment in Baner, Pune How to Apply: Option 1 (Preferred) Copy and paste the following link on your browser and submit your application for automated interview process : - https://app.candidhr.ai/app/candidate/gAAAAABoRrcIhRQqJKDXiCEfrQG8Rtsk46Etg4-K8eiwqJ_GELL6ewSC9vl4BjaTwUAHzXZTE3nOtgaiQLCso_vWzieLkoV9Nw==/ Option 2 1. Please visit our website's career section at https://www.diacto.com/careers/ 2. Scroll down to the " Who are we looking for ?" section 3. Find the listing for " Data Architect (Snowflake) " and 4. Proceed with the virtual interview by clicking on " Apply Now ."
Posted 1 month ago
15.0 - 20.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: Microsoft Fabric Good to have skills: NA Minimum 5 year(s) of experience is required Educational Qualification: 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with organizational goals, ensuring that the solutions provided are effective and efficient. Roles & Responsibilities: Expected to be an SME. Collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute on key decisions. Provide solutions to problems for their immediate team and across multiple teams. Mentor junior team members to enhance their skills and knowledge. Facilitate regular team meetings to discuss progress and address any roadblocks. Professional & Technical Skills: Lead and manage a team of data engineers, providing guidance, mentorship, and support. Foster a collaborative and innovative team culture. Work closely with stakeholders to understand data requirements and business objectives. Translate business requirements into technical specifications for the Data Warehouse. Lead the design of data models, ensuring they meet business needs and adhere to best practices. Collaborate with the Technical Architect to design dimensional models for optimal performance. Design and implement data pipelines for ingestion, transformation, and loading (ETL/ELT) using Fabric Data Factory Pipeline and Dataflows Gen2. Develop scalable and reliable solutions for batch data integration across various structured and unstructured data sources. Oversee the development of data pipelines for smooth data flow into the Fabric Data Warehouse. Implement and maintain data solutions in Fabric Lakehouse and Fabric Warehouse. Monitor and optimize pipeline performance, ensuring minimal latency and resource efficiency. Tune data processing workloads for large datasets in Fabric Warehouse and Lakehouse. Exposure to ADF and Databricks. Additional Information: The candidate should have a minimum of 5 years of experience in Microsoft Fabric. This position is based in Hyderabad. A 15 years full time education is required. Qualification: 15 years full time education
Posted 1 month ago
6.0 - 11.0 years
22 - 25 Lacs
Kochi, Bengaluru, Thiruvananthapuram
Work from Office
Candidate Skill: Technical Skills Informatica IDMC, CDI, CAI, ETL, ELT, SQL, Oracle, SQL Server, Data Quality, Cloud Integration, Operational Dashboards Job Description: We are looking for an experienced Informatica Developer with a specialization in Informatica Intelligent Data Management Cloud (IDMC). The ideal candidate should have hands-on experience with Cloud Data Integration (CDI) and Cloud Application Integration (CAI) and be skilled in building, deploying, and optimizing cloud-based ETL/ELT solutions. Key Responsibilities: Design, develop, and maintain data pipelines and integrations using Informatica IDMC. Work on Cloud Data Integration (CDI) and Cloud Application Integration (CAI) modules. Build and optimize ETL/ELT mappings, workflows, and data quality rules in a cloud setup. Deploy and monitor data jobs using IDMC's operational dashboards and alerting tools. Collaborate with data architects and business analysts to understand data integration requirements. Write and optimize SQL queries for data processing. Work with RDBMS systems such as Oracle or SQL Server. Troubleshoot and resolve integration issues efficiently. Ensure performance tuning and high availability of data solutions. Required Skills: Strong hands-on experience with Informatica IDMC. Proficiency in CDI, CAI, and cloud-based data workflows. Solid understanding of ETL/ELT processes, data quality, and data integration best practices. Expertise in SQL and working with Oracle/SQL Server. Strong analytical and problem-solving skills. Excellent communication and interpersonal abilities. Technical Key Skills: Informatica IDMC, CDI, CAI, ETL, ELT, SQL, Oracle, SQL Server, Data Quality, Cloud Integration, Operational Dashboards
Posted 1 month ago
12.0 - 15.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Skill: Java, Spark, Kafka. Experience: 10 to 16 years. Location: Hyderabad. As a Data Engineer, you will: Support in designing and rolling out the data architecture and infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources. Identify data sources, design and implement data schemas/models, and integrate data that meets the requirements of the business stakeholders. Play an active role in the end-to-end delivery of AI solutions, from ideation and feasibility assessment to data preparation and industrialization. Work with business, IT, and data stakeholders to support with data-related technical issues and their data infrastructure needs, as well as to build the most flexible and scalable data platform. With a strong focus on DataOps, design, develop, and deploy scalable batch and/or real-time data pipelines. Design, document, test, and deploy ETL/ELT processes. Find the right tradeoffs between the performance, reliability, scalability, and cost of the data pipelines you implement. Monitor data processing efficiency and propose solutions for improvements. Have the discipline to create and maintain comprehensive project documentation. Build and share knowledge with colleagues and coach junior profiles.
Posted 1 month ago
2.0 - 3.0 years
6 - 7 Lacs
Pune
Work from Office
Data Engineer Job Description : Jash Data Sciences: Letting Data Speak! Do you love solving real-world data problems with the latest and best techniques? And having fun while solving them in a team! Then come and join our high-energy team of passionate data people. Jash Data Sciences is the right place for you. We are a cutting-edge Data Sciences and Data Engineering startup based in Pune, India. We believe in continuous learning and evolving together. And we let the data speak! What will you be doing? You will be discovering trends in the data sets and developing algorithms to transform raw data for further analytics Create Data Pipelines to bring in data from various sources, with different formats, transform it, and finally load it to the target database. Implement ETL/ ELT processes in the cloud using tools like AirFlow, Glue, Stitch, Cloud Data Fusion, and DataFlow. Design and implement Data Lake, Data Warehouse, and Data Marts in AWS, GCP, or Azure using Redshift, BigQuery, PostgreSQL, etc. Creating efficient SQL queries and understanding query execution plans for tuning queries on engines like PostgreSQL. Performance tuning of OLAP/ OLTP databases by creating indices, tables, and views. Write Python scripts for the orchestration of data pipelines Have thoughtful discussions with customers to understand their data engineering requirements. Break complex requirements into smaller tasks for execution. What do we need from you? Strong Python coding skills with basic knowledge of algorithms/data structures and their application. Strong understanding of Data Engineering concepts including ETL, ELT, Data Lake, Data Warehousing, and Data Pipelines. Experience designing and implementing Data Lakes, Data Warehouses, and Data Marts that support terabytes of scale data. A track record of implementing Data Pipelines on public cloud environments (AWS/GCP/Azure) is highly desirable A clear understanding of Database concepts like indexing, query performance optimization, views, and various types of schemas. Hands-on SQL programming experience with knowledge of windowing functions, subqueries, and various types of joins. Experience working with Big Data technologies like PySpark/ Hadoop A good team player with the ability to communicate with clarity Show us your git repo/ blog! Qualification 1-2 years of experience working on Data Engineering projects for Data Engineer I 2-5 years of experience working on Data Engineering projects for Data Engineer II 1-5 years of Hands-on Python programming experience Bachelors/Masters' degree in Computer Science is good to have Courses or Certifications in the area of Data Engineering will be given a higher preference. Candidates who have demonstrated a drive for learning and keeping up to date with technology by continuing to do various courses/self-learning will be given high preference.
Posted 1 month ago
3.0 - 5.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Data Engineer openings at Advantum Health Pvt Ltd, Hyderabad. Overview: We are looking for a Data Engineer to build and optimize robust data pipelines that support AI and RCM analytics. This role involves integrating structured and unstructured data from diverse healthcare systems into scalable, AI-ready datasets. Key Responsibilities: Design, implement, and optimize data pipelines for ingesting and transforming healthcare and RCM data. Build data marts and warehouses to support analytics and machine learning. Ensure data quality, lineage, and governance across AI use cases. Integrate data from EMRs, billing platforms, claims databases, and third-party APIs. Support data infrastructure in a HIPAA-compliant cloud environment. Qualifications: Bachelor's in Computer Science, Data Engineering, or related field. 3+ years of experience with ETL/ELT pipelines using tools like Apache Airflow, dbt, or Azure Data Factory. Strong SQL and Python skills. Experience with healthcare data standards (HL7, FHIR, X12) preferred. Familiarity with data lakehouse architectures and AI integration best practices. Ph: 9177078628 Email id: jobs@advantumhealth.com Address: Advantum Health Private Limited, Cyber gateway, Block C, 4th floor, Hitech City, Hyderabad. Do follow us on LinkedIn, Facebook, Instagram, YouTube and Threads Advantum Health LinkedIn Page: https://lnkd.in/gVcQAXK3 Advantum Health Facebook Page: https://lnkd.in/g7ARQ378 Advantum Health Instagram Page: https://lnkd.in/gtQnB_Gc Advantum Health India YouTube link: https://lnkd.in/g_AxPaPp Advantum Health Threads link: https://lnkd.in/gyq73iQ6
Posted 1 month ago
8.0 - 13.0 years
25 - 40 Lacs
Bengaluru
Remote
Bachelor's/Master's. Overview: We are seeking a highly skilled and experienced Celonis MDM Data Architect to lead the design, implementation, and optimization of our Master Data Management (MDM) solutions in alignment with Celonis Process Mining and Execution Management System (EMS) capabilities. The ideal candidate will play a key role in bridging data architecture and business process insights, ensuring data quality, consistency, and governance across the enterprise. Key Responsibilities: Design and implement MDM architecture and data models aligned with enterprise standards and best practices. • Lead the integration of Celonis with MDM platforms to drive intelligent process automation, data governance, and operational efficiencies. • Collaborate with business stakeholders and data stewards to define MDM policies, rules, and processes. • Support data profiling, data cleansing, and data harmonization efforts to improve master data quality. • Work closely with Celonis analysts, data engineers, and process owners to deliver actionable insights based on MDM-aligned process data. • Develop and maintain scalable, secure, and high-performance data pipelines and integration architectures. • Translate business requirements into technical solutions, ensuring alignment with both MDM and Celonis data models. • Create and maintain data architecture documentation, data dictionaries, and metadata repositories. • Monitor and optimize the performance of MDM systems and Celonis EMS integrations. Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field. • 7+ years of experience in data architecture, MDM, or enterprise data management. • 2+ years of hands-on experience with Celonis and process mining tools. • Proficient in MDM platforms (e.g., Informatica MDM, SAP MDG, Oracle MDM, etc.). • Strong knowledge of data modeling, data governance, and metadata management. • Proficiency in SQL, data integration tools (e.g., ETL/ELT platforms), and APIs. • Deep understanding of business process management and data-driven transformation initiatives.
Posted 1 month ago
7.0 - 12.0 years
22 - 37 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Hybrid
Hiring: Data Engineering Senior Software Engineer / Tech Lead / Senior Tech Lead - Hybrid (3 days from office) | Shift: 2 PM to 11 PM IST - Experience: 5 to 12+ years (based on role & grade) Open Grades/Roles: Senior Software Engineer: 5–8 Years Tech Lead: 7–10 Years Senior Tech Lead: 10–12+ Years Job Description – Data Engineering Team Core Responsibilities (Common to All Levels): Design, build and optimize ETL/ELT pipelines using tools like Pentaho, Talend, or similar Work on traditional databases (PostgreSQL, MSSQL, Oracle) and MPP/modern systems (Vertica, Redshift, BigQuery, MongoDB) Collaborate cross-functionally with BI, Finance, Sales, and Marketing teams to define data needs Participate in data modeling (ER/DW/Star schema), data quality checks, and data integration Implement solutions involving messaging systems (Kafka), REST APIs, and scheduler tools (Airflow, Autosys, Control-M) Ensure code versioning and documentation standards are followed (Git/Bitbucket) Additional Responsibilities by Grade Senior Software Engineer (5–8 Yrs): Focus on hands-on development of ETL pipelines, data models, and data inventory Assist in architecture discussions and POCs Good to have: Tableau/Cognos, Python/Perl scripting, GCP exposure Tech Lead (7–10 Yrs): Lead mid-sized data projects and small teams Decide on ETL strategy (Push Down/Push Up) and performance tuning Strong working knowledge of orchestration tools, resource management, and agile delivery Senior Tech Lead (10–12+ Yrs): Drive data architecture, infrastructure decisions, and internal framework enhancements Oversee large-scale data ingestion, profiling, and reconciliation across systems Mentoring junior leads and owning stakeholder delivery end-to-end Advantageous: Experience with AdTech/Marketing data, Hadoop ecosystem (Hive, Spark, Sqoop) - Must-Have Skills (All Levels): ETL Tools: Pentaho / Talend / SSIS / Informatica Databases: PostgreSQL, Oracle, MSSQL, Vertica / Redshift / BigQuery Orchestration: Airflow / Autosys / Control-M / JAMS Modeling: Dimensional Modeling, ER Diagrams Scripting: Python or Perl (Preferred) Agile Environment, Git-based Version Control Strong Communication and Documentation
Posted 1 month ago
5.0 - 9.0 years
7 - 17 Lacs
Pune
Work from Office
Job Overview: Diacto is seeking an experienced and highly skilled Data Architect to lead the design and development of scalable and efficient data solutions. The ideal candidate will have strong expertise in Azure Databricks, Snowflake (with DBT, GitHub, Airflow), and Google BigQuery. This is a full-time, on-site role based out of our Baner, Pune office. Qualifications: B.E./B.Tech in Computer Science, IT, or related discipline MCS/MCA or equivalent preferred Key Responsibilities: Design, build, and optimize robust data architecture frameworks for large-scale enterprise solutions Architect and manage cloud-based data platforms using Azure Databricks, Snowflake, and BigQuery Define and implement best practices for data modeling, integration, governance, and security Collaborate with engineering and analytics teams to ensure data solutions meet business needs Lead development using tools such as DBT, Airflow, and GitHub for orchestration and version control Troubleshoot data issues and ensure system performance, reliability, and scalability Guide and mentor junior data engineers and developers Experience and Skills Required: 5 to 12 years of experience in data architecture, engineering, or analytics roles Hands-on expertise in Databricks, especially Azure Databricks Proficient in Snowflake, with working knowledge of DBT, Airflow, and GitHub Experience with Google BigQuery and cloud-native data processing workflows Strong knowledge of modern data architecture, data lakes, warehousing, and ETL pipelines Excellent problem-solving, communication, and analytical skills Nice to Have: Certifications in Azure, Snowflake, or GCP Experience with containerization (Docker/Kubernetes) Exposure to real-time data streaming and event-driven architecture Why Join Diacto Technologies? Collaborate with experienced data professionals and work on high-impact projects Exposure to a variety of industries and enterprise data ecosystems Competitive compensation, learning opportunities, and an innovation-driven culture Work from our collaborative office space in Baner, Pune How to Apply: Option 1 (Preferred) Copy and paste the following link on your browser and submit your application for the automated interview process: https://app.candidhr.ai/app/candidate/gAAAAABoRrTQoMsfqaoNwTxsE_qwWYcpcRyYJk7NzSUmO3LKb6rM-8FcU58CUPYQKc65n66feHor-TGdCEfyouj0NmKdgYcNbA==/ Option 2 1. Please visit our website's career section at https://www.diacto.com/careers/ 2. Scroll down to the "Who are we looking for?" section 3. Find the listing for "Data Architect (Data Bricks)" and 4. Proceed with the virtual interview by clicking on "Apply Now."
Posted 1 month ago
5.0 - 10.0 years
6 - 15 Lacs
Bengaluru
Work from Office
Urgent Hiring: Azure Data Engineer with a leading Management Consulting Company at Bangalore location. Strong expertise in Databricks & PySpark while dealing with batch processing or live (streaming) data sources. 4+ relevant years of experience in Databricks & PySpark/Scala; 7+ total years of experience. Good in data modelling and designing. CTC: hike shall be considered on current/last drawn pay. Apply - rohita.robert@adecco.com Has worked on real data challenges and handled high volume, velocity, and variety of data. Excellent analytical & problem-solving skills, willingness to take ownership and resolve technical challenges. Contributes to community building initiatives like CoE, CoP. Mandatory skills: Azure (Master); ELT (Skill); Data Modeling (Skill); Data Integration & Ingestion (Skill); Data Manipulation and Processing (Skill); GitHub, GitHub Actions, Azure DevOps (Skill); Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest (Skill)
Posted 1 month ago
9.0 - 12.0 years
25 - 35 Lacs
Hyderabad
Hybrid
Experience: 12 years only. Notice period: immediate / 15 days. Location: Hyderabad. Client: Tech Star Group. Please highlight the mandatory skills in the resume. Client Feedback: In short, the client is primarily looking for a candidate with strong expertise in data-related skills, including: SQL & Database Management: Deep knowledge of relational databases (PostgreSQL), cloud-hosted data platforms (AWS, Azure, GCP), and data warehouses like Snowflake. ETL/ELT Tools: Experience with SnapLogic, StreamSets, or DBT for building and maintaining data pipelines; extensive experience with ETL tools and data pipelines. Data Modeling & Optimization: Strong understanding of data modeling, OLAP systems, query optimization, and performance tuning. Cloud & Security: Familiarity with cloud platforms and SQL security techniques (e.g., data encryption, TDE). Data Warehousing: Experience managing large datasets, data marts, and optimizing databases for performance. Agile & CI/CD: Knowledge of Agile methodologies and CI/CD automation tools. Important: The candidate should have a strong data engineering background with hands-on experience in handling large volumes of data, data pipelines, and cloud-based data systems.
Posted 1 month ago
6.0 - 11.0 years
8 - 13 Lacs
Gurugram
Work from Office
About the Role: Grade Level (for internal use): 10 Position summary Our proprietary software-as-a-service helps automotive dealerships and sales teams better understand and predict exactly which customers are ready to buy, the reasons why, and the key offers and incentives most likely to close the sale. Its micro-marketing engine then delivers the right message at the right time to those customers, ensuring higher conversion rates and a stronger ROI. What You'll Do You will be part of our Data Platform & Product Insights data engineering team. As part of this agile team, you will work in our cloud native environment to: Build & support data ingestion and processing pipelines in cloud. This will entail extraction, load and transformation of big data from a wide variety of sources, both batch & streaming, using the latest data frameworks and technologies. Partner with the product team to assemble large, complex data sets that meet functional and non-functional business requirements, and ensure build-out of Data Dictionaries/Data Catalogue and detailed documentation and knowledge around these data assets, metrics and KPIs. Warehouse this data, build data marts, data aggregations, metrics, KPIs, and business logic that leads to actionable insights into our product efficacy, marketing platform, customer behaviour, retention etc. Build real-time monitoring dashboards and alerting systems. Coach and mentor other team members. Who you are 6+ years of experience in Big Data and Data Engineering. Strong knowledge of advanced SQL, data warehousing concepts and data mart designing. Have strong programming skills in SQL, Python/PySpark etc. Experience in design and development of data pipelines, ETL/ELT processes on-premises/cloud. Experience in one of the cloud providers: GCP, Azure, AWS. Experience with relational SQL and NoSQL databases, including Postgres and MongoDB. Experience with workflow management tools: Airflow, AWS Data Pipeline, Google Cloud Composer etc. Experience with distributed version control environments such as Git, Azure DevOps. Building Docker images and fetching/promoting and deploying to production. Integrating Docker container orchestration framework using Kubernetes by creating pods, ConfigMaps, and deployments using Terraform. Should be able to convert business queries into technical documentation. Strong problem solving and communication skills. Bachelor's or an advanced degree in Computer Science or related engineering discipline. Good to have some exposure to Business Intelligence (BI) tools like Tableau, Dundas, Power BI etc. Agile software development methodologies. Working in multi-functional, multi-location teams. Grade: 10. Location: Gurugram. Hybrid Model: twice a week work from office. Shift Time: 12 pm to 9 pm IST. What You'll Love About Us Do ask us about these! Total Rewards. Monetary, beneficial and developmental rewards! Work Life Balance. You can't do a good job if your job is all you do! Prepare for the Future. Academy we are all learners; we are all teachers! Employee Assistance Program. Confidential and Professional Counselling and Consulting. Diversity & Inclusion. HeForShe! Internal Mobility. Grow with us! About automotiveMastermind Who we are: Founded in 2012, automotiveMastermind is a leading provider of predictive analytics and marketing automation solutions for the automotive industry and believes that technology can transform data, revealing key customer insights to accurately predict automotive sales.
Through its proprietary automated sales and marketing platform, Mastermind, the company empowers dealers to close more deals by predicting future buyers and consistently marketing to them. automotiveMastermind is headquartered in New York City. For more information, visit automotivemastermind.com. At automotiveMastermind, we thrive on high energy at high speed. We're an organization in hyper-growth mode and have a fast-paced culture to match. Our highly engaged teams feel passionately about both our product and our people. This passion is what continues to motivate and challenge our teams to be best-in-class. Our cultural values of Drive and Help have been at the core of what we do, and how we have built our culture through the years. This cultural framework inspires a passion for success while collaborating to win. What we do: Through our proprietary automated sales and marketing platform, Mastermind, we empower dealers to close more deals by predicting future buyers and consistently marketing to them. In short, we help automotive dealerships generate success in their loyalty, service, and conquest portfolios through a combination of turnkey predictive analytics, proactive marketing, and dedicated consultative services. What's In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries
Posted 1 month ago