3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Title/Position: Full Stack Quality Engineer
Job Location: Pune
Employment Type: Full Time
Shift Time: UK Shift

Job Overview: We are seeking a detail-oriented and highly motivated Quality Assurance Engineer to join our dynamic team. As a QA Engineer, you will play a crucial role in ensuring the quality and reliability of our software products. The ideal candidate will possess strong analytical skills, a keen eye for detail, and a passion for delivering high-quality software solutions.

Responsibilities:
- Develop and execute test plans, test cases, and automation scripts for ETL pipelines and data validation.
- Perform SQL-based data validation to ensure data integrity, consistency, and correctness, preferably on RDS.
- Work closely with data engineers to test and validate data and frontend implementations.
- Automate data quality tests using Python and integrate them into CI/CD pipelines (a hedged sketch follows below).
- Use GitHub for version control, managing test scripts, and collaborating on automation frameworks.
- Debug and troubleshoot data-related issues across different environments.
- Ensure data security, compliance, and governance requirements are met.
- Collaborate with stakeholders to define and improve testing strategies for big data solutions.
- Create and update automation scripts for frontend and API testing using frameworks like Selenium, Pytest, and others.

Requirements:
- 3+ years of experience in QA, with a focus on data testing, ETL testing, and data validation.
- Strong proficiency in SQL for data validation, transformation testing, and debugging.
- Good working knowledge of Python, Node.js, Angular, and GitHub for test automation and version control.
- Experience with ETL, frontend testing, and data solutions.
- Understanding of data pipelines, the Blue framework, and data governance concepts.
- Experience with cloud platforms (AWS) is a plus.
- Knowledge of PySpark and distributed data processing frameworks is a plus.
- Familiarity with CI/CD pipelines and test automation frameworks.
- Strong problem-solving skills and attention to detail.
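As an illustration of the SQL-based data validation and Pytest automation this posting asks for, here is a minimal, hedged sketch. The table and column names (orders_src, orders_tgt) are hypothetical, and SQLite stands in for the actual RDS connection:

```python
import sqlite3

import pytest


@pytest.fixture
def conn():
    # SQLite stands in for an RDS connection; swap in your real driver/DSN.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE orders_src (id INTEGER, amount REAL);
        CREATE TABLE orders_tgt (id INTEGER, amount REAL);
        INSERT INTO orders_src VALUES (1, 10.0), (2, 20.5);
        INSERT INTO orders_tgt VALUES (1, 10.0), (2, 20.5);
    """)
    yield db
    db.close()


def test_row_counts_match(conn):
    # Basic completeness check: target row count equals source row count.
    src = conn.execute("SELECT COUNT(*) FROM orders_src").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM orders_tgt").fetchone()[0]
    assert src == tgt


def test_no_orphan_ids(conn):
    # Integrity check: every target id must exist in the source.
    orphans = conn.execute("""
        SELECT COUNT(*) FROM orders_tgt t
        LEFT JOIN orders_src s ON s.id = t.id
        WHERE s.id IS NULL
    """).fetchone()[0]
    assert orphans == 0
```

Run with `pytest` inside the CI/CD pipeline so every change re-validates the data.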
Posted 1 week ago
4.0 - 8.0 years
6 - 10 Lacs
Kochi, Chennai
Work from Office
As a BI/DW Developer (Software Engineer), your responsibilities are to:
1. Design and implement scalable ETL/ELT pipelines using tools like SQL, Python, or Spark (a hedged sketch follows below).
2. Build and maintain data models and data marts to support analytics and reporting use cases.
3. Develop, maintain, and optimize dashboards and reports using BI tools such as Power BI, Tableau, or Looker.
4. Collaborate with stakeholders to understand data needs and translate them into technical requirements.
5. Perform data validation and ensure data quality and integrity across systems.
6. Monitor and troubleshoot data workflows and reports to ensure accuracy and performance.
7. Contribute to the data platform architecture, including database design and cloud data infrastructure.
8. Mentor junior team members and promote best practices in BI and DW development.

Skills we're looking for:
- Strong proficiency in SQL and experience working with large-scale relational databases (e.g., Snowflake, Redshift, BigQuery, PostgreSQL).
- Experience with modern ETL/ELT tools such as dbt, Apache Airflow, Talend, Informatica, or custom pipelines using Python.
- Proven expertise in one or more BI tools (Power BI, Tableau, Looker, etc.).
- Solid understanding of data warehousing concepts, dimensional modeling, and star/snowflake schemas.
- Strong problem-solving skills and attention to detail.
- Good verbal communication skills in English.
- Delivery focused, a go-getter!

Work experience requirements:
- 4-8 years in BI/DW or Data Engineering roles
- Exposure to machine learning workflows or advanced analytics is a plus

Whom this role is suited for:
- Available to join asap, and work from the Kochi office
- Experience in start-up environments preferred

Job Location: Carnival Infopark, Kochi
Please attach your resume to the mail.
We will handle your personal data as described in our Privacy Policy.
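A hedged sketch of the kind of custom Python ELT step this posting mentions, loading a day of orders into a star-schema fact table; the connection strings and table names (src_orders, fact_orders) are hypothetical:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection strings; substitute your real source and warehouse.
source = create_engine("postgresql://user:pass@source-db/app")
warehouse = create_engine("postgresql://user:pass@warehouse/analytics")


def load_fact_orders(run_date: str) -> int:
    # Extract: pull one day of orders from the operational database.
    df = pd.read_sql(
        "SELECT order_id, customer_id, amount, created_at "
        "FROM src_orders WHERE created_at::date = %(d)s",
        source,
        params={"d": run_date},
    )
    # Transform: derive a surrogate date key, the usual star-schema pattern.
    df["date_key"] = pd.to_datetime(df["created_at"]).dt.strftime("%Y%m%d").astype(int)
    # Load: append into the fact table; dimension tables would load similarly.
    df.to_sql("fact_orders", warehouse, if_exists="append", index=False)
    return len(df)


if __name__ == "__main__":
    print(load_fact_orders("2025-06-01"), "rows loaded")
```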
Posted 1 week ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad
Work from Office
We are looking for an experienced Senior Data Engineer with a strong foundation in Python, SQL, and Spark, and hands-on expertise in AWS and Databricks. In this role, you will build and maintain scalable data pipelines and architecture to support analytics, data science, and business intelligence initiatives. You'll work closely with cross-functional teams to drive data reliability, quality, and performance.

Responsibilities:
- Design, develop, and optimize scalable data pipelines using Databricks and AWS services such as Glue, S3, Lambda, and EMR, including Databricks notebooks, workflows, and jobs.
- Build a data lake in AWS Databricks.
- Build and maintain robust ETL/ELT workflows using Python and SQL to handle structured and semi-structured data.
- Develop distributed data processing solutions using Apache Spark or PySpark (see the sketch below).
- Partner with data scientists and analysts to provide high-quality, accessible, and well-structured data.
- Ensure data quality, governance, security, and compliance across pipelines and data stores.
- Monitor, troubleshoot, and improve the performance of data systems and pipelines.
- Participate in code reviews and help establish engineering best practices.
- Mentor junior data engineers and support their technical development.

Requirements:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 5+ years of hands-on experience in data engineering.
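A minimal PySpark sketch of the kind of distributed transformation described above, flattening semi-structured JSON events into a queryable table; the S3 paths and field names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-to-table").getOrCreate()

# Read semi-structured JSON events from a hypothetical S3 landing zone.
events = spark.read.json("s3://example-lake/landing/events/")

# Flatten nested fields and standardize types for downstream SQL users.
clean = (
    events
    .withColumn("event_date", F.to_date("timestamp"))
    .withColumn("user_id", F.col("user.id"))
    .filter(F.col("user_id").isNotNull())   # basic quality gate
    .dropDuplicates(["event_id"])           # keeps re-runs idempotent
)

# Write partitioned Parquet so the table scans cheaply by date.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-lake/curated/events/"))
```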
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
At Oracle Health, we put humans at the heart of every conversation. Our mission is to create a human-centric healthcare experience powered by unified global data. As a global leader, we're looking for a Data Engineer with BI to join an exciting project replacing existing data warehouse systems with Oracle's own data warehouse, which will manage storage of all internal corporate data and provide insights that help our teams make critical business decisions. Join us and create the future!

Roles and Responsibilities:
- Write and optimise SQL queries for data extraction
- Translate client requirements into a technical design that junior team members can implement
- Develop code that aligns with the technical design and coding standards
- Review design and code implemented by other team members; recommend better design and more efficient code
- Conduct peer design and code reviews for early detection of defects and code quality
- Document ETL processes and data flow diagrams
- Optimize data extraction and transformation processes for better performance
- Perform data quality checks and debug issues (a hedged sketch follows below)
- Conduct root cause analysis for data issues and implement fixes
- Collaborate with more experienced developers on larger projects, and with stakeholders on requirements
- Participate in requirements, design, and implementation discussions
- Participate in learning and development opportunities to enhance technical skills
- Test the storage system after transferring the data
- Exposure to Business Intelligence platforms like OAC, Power BI, or Tableau

Technical Skill Set:
- You must be strong in PL/SQL concepts such as tables, keys, and DDL and DML commands.
- You need to be proficient in writing and debugging complex SQL queries, views, and stored procedures.
- Strong hands-on experience in Python/PySpark programming.
- As a Data Engineer, you must be strong in data modelling, ETL/ELT concepts, and programming/scripting in Python.
- You must be proficient in the following ETL process automation tools: Oracle Data Integrator (ODI), Oracle Data Flow, Oracle Database / Autonomous Data Warehouse.
- Working knowledge of a cloud platform such as Oracle Cloud (preferred), Microsoft Azure, or AWS.
- You must be able to create technical designs, build prototypes, build and maintain high-performing data pipelines, and optimise ETL pipelines.
- Good knowledge of Business Intelligence development tools like OAC and Power BI.
- Good to have: Microsoft ADF, Data Lakes, Databricks.
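A hedged sketch of a post-load data quality check against an Oracle warehouse, using the python-oracledb driver; the DSN, table names, and the etl_pkg.refresh_sales_mart procedure are hypothetical:

```python
import oracledb

# Hypothetical credentials/DSN; use your Autonomous Data Warehouse connect string.
conn = oracledb.connect(user="etl_user", password="***", dsn="adw_high")

with conn.cursor() as cur:
    # Invoke a hypothetical PL/SQL procedure that refreshes a reporting table.
    cur.callproc("etl_pkg.refresh_sales_mart")

    # Validate the load: staging and mart row counts must agree for today.
    cur.execute("SELECT COUNT(*) FROM stg_sales")
    staged = cur.fetchone()[0]
    cur.execute("SELECT COUNT(*) FROM sales_mart WHERE load_date = TRUNC(SYSDATE)")
    loaded = cur.fetchone()[0]

    if staged != loaded:
        raise RuntimeError(f"Load mismatch: staged={staged}, loaded={loaded}")

conn.commit()
conn.close()
```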
Posted 1 week ago
6.0 - 8.0 years
8 - 10 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled and motivated Senior Snowflake Developer to join our growing data engineering team. In this role, you will be responsible for building scalable and secure data pipelines and Snowflake-based architectures that power data analytics across the organization. You'll collaborate with business and technical stakeholders to design robust solutions in an AWS environment and play a key role in driving our data strategy forward.

Responsibilities:
- Design, develop, and maintain efficient and scalable Snowflake data warehouse solutions on AWS.
- Build robust ETL/ELT pipelines using SQL, Python, and AWS services (e.g., Glue, Lambda, S3); a hedged sketch follows below.
- Collaborate with data analysts, engineers, and business teams to gather requirements and design data models aligned with business needs.
- Optimize Snowflake performance through best practices in clustering, partitioning, caching, and query tuning.
- Ensure data quality, accuracy, and completeness across data pipelines and warehouse processes.
- Maintain documentation and enforce best practices for data architecture, governance, and security.
- Continuously evaluate tools, technologies, and processes to improve system reliability, scalability, and performance.
- Ensure compliance with relevant data privacy and security regulations (e.g., GDPR, CCPA).

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum 6 years of experience in data engineering, with at least 3 years of hands-on experience with Snowflake.
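A minimal sketch of loading S3-staged data into Snowflake from Python with the snowflake-connector-python package; the account, stage, table, and the load_ts column are all hypothetical stand-ins:

```python
import snowflake.connector

# Hypothetical account and credentials; use your own Snowflake parameters.
conn = snowflake.connector.connect(
    account="xy12345.us-east-1",
    user="ETL_USER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

with conn.cursor() as cur:
    # COPY INTO pulls files from an external S3 stage into a raw table.
    cur.execute("""
        COPY INTO raw_orders
        FROM @s3_landing/orders/
        FILE_FORMAT = (TYPE = 'PARQUET')
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    # Cheap quality gate: fail loudly if nothing landed today.
    cur.execute("SELECT COUNT(*) FROM raw_orders WHERE load_ts::date = CURRENT_DATE")
    if cur.fetchone()[0] == 0:
        raise RuntimeError("COPY INTO loaded zero rows for today")

conn.close()
```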
Posted 1 week ago
6.0 - 11.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Medable's mission is to get effective therapies to patients faster. We provide an end-to-end, cloud-based platform with a flexible suite of tools that allows patients, healthcare providers, clinical research organizations, and pharmaceutical sponsors to work together as a team in clinical trials. Our solutions enable more efficient clinical research, more effective healthcare delivery, and more accurate precision and predictive medicine. Our target audiences are patients, providers, principal investigators, and innovators who work in healthcare and life sciences.

Our vision is to accelerate the path to human discovery and medical cures. We are passionate about driving innovation and empowering consumers. We are proactive, collaborative, self-motivated learners, committed, bold and tenacious. We are dedicated to making this world a healthier place.

1. Responsibilities
- Identify opportunities to build data science solutions that address complex business problems
- Lead cross-functional teams in the implementation of Data Services tools
- Support and lead the creation of relevant dummy data to test algorithms based on clinical study protocols and client needs
- Collaborate with internal stakeholders to implement data standardization across our products
- Communicate key insights and findings
- Lead/collaborate on deep-dives to identify root causes of data issues in systems owned by Data Services and other departments
- Represent Data Science/Data Services at internal and external meetings with no oversight from Data Services leadership
- Mentor junior team members to drive development of the Data Services team and consistently produce high-quality Data Services deliverables
- Other duties as assigned

2. Experience
- 6+ years working with data analyses/reports to provide data-driven conclusions and direction, of which at least 50% is directly related to clinical trials (especially DCTs) within the pharmaceutical industry
- Practical experience in defining and maintaining Data Quality/Integrity
- Demonstrated expert knowledge of the role that decentralized trials (DCTs) and data play in clinical drug development
- Experience in leading cross-functional teams and representing the data component
- Ability to read and interpret clinical study protocols and other internal/external documents that define products/solutions/implementation plans, particularly where data form the key component of the deliverable

3. Skills
- High attention to detail in order to draw insightful, data-driven, business-relevant conclusions
- Excellent communication and presentation skills with experience presenting to both internal and external stakeholders
- Critical thinking and problem-solving skills

4. Education, Certifications, Licenses
- Bachelor's degree in a related field, i.e. Data Science, Computer Science, or Mathematics

5. Travel Requirements
- As required

At Medable, we believe that our team of Medaballers is our greatest asset. That is why we are committed to your personal and professional well-being. Our rewards are more than just benefits - they demonstrate our commitment to providing an inclusive, healthy and rewarding experience for all our team members.
Flexible Work
- Remote from the start, we believe in a flexible employee experience

Compensation
- Competitive base salaries
- Annual performance-based bonus
- Stock options for employees, aligning personal achievements to Medable's success

Health and Wellness
- Comprehensive medical, dental, and vision insurance coverage
- Carrot Fertility Program
- Health Savings Accounts (HSA) and Flexible Spending Accounts (FSA)
- Wellness program (mental, physical and financial)

Recognition
- Peer-to-peer recognition program, celebrating achievements and milestones

Community Involvement
- Volunteer time off to support causes you care about

Medable is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need assistance or would like to request an accommodation due to a disability, please contact us at hr@medable.com.
Posted 1 week ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Are you passionate about building scalable data systems and driving data quality across complex ecosystems? Join Derisk360 to work on advanced cloud and data engineering initiatives that power intelligent business decision-making.

What You'll Do:
- Work with a broad stack of AWS services: S3, AWS Glue, Glue Catalog, Lambda, Step Functions, EventBridge, and more.
- Develop and implement robust data quality checks using DQ libraries (a hedged sketch follows below).
- Lead efforts in data modeling and manage relational and NoSQL databases.
- Build and automate ETL workflows using Unix scripting.
- Apply DevOps and Agile methodologies, including use of CI/CD tools and code repositories.
- Engineer scalable big data solutions with Apache Spark.
- Design impactful dashboards using Amazon QuickSight and Microsoft Power BI.
- Integrate real-time data pipelines with data sourcing strategies, including real-time integration solutions.
- Spearhead cloud migration efforts to Azure Data Lake, including data transitions from on-premise environments.

What You Bring:
- 8+ years of hands-on experience in data engineering roles.
- Proficiency in AWS cloud services and modern ETL technologies.
- Solid programming experience.
- Strong understanding of data architecture, quality frameworks, and reporting tools.
- Experience working in Agile environments and using version control/CI pipelines.
- Exposure to big data frameworks, real-time integration tools, and cloud data platforms.

What You'll Get:
- Competitive compensation.
- Lead and contribute to mission-critical data engineering projects.
- Work in a high-performance team.
- Continuous learning environment with access to cutting-edge technologies.
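The posting doesn't name a specific DQ library, so here is a hedged, dependency-light sketch of the kind of rule-based data quality check it describes, written as an AWS Lambda handler that Step Functions or EventBridge could route; the bucket layout and column names are hypothetical:

```python
import csv
import io

import boto3

s3 = boto3.client("s3")

# Hypothetical rules: column -> predicate every value must satisfy.
RULES = {
    "order_id": lambda v: v.strip() != "",
    "amount": lambda v: float(v) >= 0,
}


def handler(event, context):
    # Triggered by an S3 put event; check each new CSV against the rules.
    record = event["Records"][0]["s3"]
    obj = s3.get_object(Bucket=record["bucket"]["name"], Key=record["object"]["key"])
    reader = csv.DictReader(io.StringIO(obj["Body"].read().decode("utf-8")))

    failures = 0
    for row in reader:
        for col, ok in RULES.items():
            try:
                if not ok(row[col]):
                    failures += 1
            except (KeyError, ValueError):
                failures += 1

    if failures:
        # Raising makes the failure visible to the orchestrating state machine.
        raise RuntimeError(f"{failures} data quality violations found")
    return {"status": "ok"}
```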
Posted 1 week ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad
Work from Office
About the Role
We are hiring a Staff Data Engineer to join our India Operations and play a crucial role in our mission to establish a world-class data engineering team within the Center for Data and Insights (CDI). Reporting directly to the Director of Data Engineering, you will be a key contributor, advancing our data engineering capabilities in the AWS and GCP ecosystems. Your responsibilities include collaborating with key stakeholders, guiding and mentoring fellow data engineers, and working hands-on in areas such as data architecture, data lake infrastructure, and data and ML job orchestration. Your contributions will ensure the consistency and reliability of data and insights, aligning with our objective of enabling well-informed decision-making.

The ideal candidate will demonstrate an empathetic and service-oriented approach, fostering a thriving data and insights culture while enhancing and safeguarding our data infrastructure. This role presents a unique opportunity to build and strengthen our data engineering platforms at a global level. If you are an experienced professional with a passion for impactful data engineering initiatives and a commitment to driving transformative changes, we encourage you to explore this role. Joining us as a Staff Data Engineer allows you to significantly contribute to the trajectory of our CDI, making a lasting impact on our data-centric aspirations as we aim for new heights.

Core Areas of Responsibility
- Implement robust data infrastructure, platforms, and solutions.
- Collaborate effectively with cross-functional teams and CDI leaders, ensuring the delivery of timely data loads and jobs tailored to their unique needs.
- Guide and mentor the team of skilled data engineers, prioritizing a service-oriented approach and quick response times.
- Advocate for the enhancement of, and adherence to, high data quality standards, KPI certification methods, and engineering best practices.
- Approach reporting platforms and analytical processes with innovative thinking, considering the evolving demands of the business.
- Implement the strategy for migrating from AWS to GCP with near real-time events and machine learning pipelines, using our customer data platform (Segment) and purpose-built pipelines and DBs to activate systems of intelligence (a hedged event-streaming sketch follows below).
- Continuously improve reporting workflows and efficiency, harnessing the power of automation whenever feasible.
- Enhance the performance, reliability, and scalability of the storage and compute layers of the data lake.

About You
We get excited about candidates, like you, because...
- 8+ years of hands-on experience in data engineering and/or software development.
- Highly skilled in programming languages like Python, Spark, and SQL.
- Comfortable using BI tools like Tableau, Looker, Preset, and so on.
- Proficient in utilizing event data collection tools such as Snowplow, Segment, Google Tag Manager, Tealium, mParticle, and more.
- Comprehensive expertise across the entire lifecycle of implementing compute and orchestration tools like Databricks, Airflow, Talend, and others.
- Skilled in working with streaming OLAP engines like Druid, ClickHouse, and similar technologies.
- Experience leveraging AWS services including EMR Spark, Redshift, Kinesis, Lambda, Glue, S3, and Athena, among others.
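As a hedged illustration of the near real-time event work described above, here is a minimal Kinesis producer in Python; the stream name and event shape are hypothetical stand-ins for a Segment-style pipeline:

```python
import json
import time
import uuid

import boto3

kinesis = boto3.client("kinesis")


def publish_event(user_id: str, event_type: str, properties: dict) -> None:
    # Segment-style track event; the schema here is a hypothetical example.
    event = {
        "message_id": str(uuid.uuid4()),
        "user_id": user_id,
        "type": event_type,
        "properties": properties,
        "sent_at": time.time(),
    }
    # Partition by user so each user's events stay ordered within a shard.
    kinesis.put_record(
        StreamName="cdi-events",   # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=user_id,
    )


if __name__ == "__main__":
    publish_event("user-123", "page_viewed", {"path": "/pricing"})
```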
Nice to have:
- Exposure to GCP services like BigQuery, Google Storage, Looker, Google Analytics, and so on.
- Good understanding of building real-time data systems as well as AI/ML personalization products.
- Experience with Customer Data Platforms (CDPs) and Data Management Platforms (DMPs), contributing to holistic data strategies.
- Familiarity with high-security environments like HIPAA, PCI, or similar contexts, highlighting a commitment to data privacy and security.
- Accomplished in managing large-scale data sets, handling terabytes of data and billions of records effectively.
- You hold a Bachelor's degree in Computer Science, Information Systems, or a related field, providing strong foundational knowledge.
Posted 1 week ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad
Work from Office
Why We Work at Dun & Bradstreet
Dun & Bradstreet unlocks the power of data through analytics, creating a better tomorrow. Each day, we are finding new ways to strengthen our award-winning culture and accelerate creativity, innovation and growth. Our 6,000+ global team members are passionate about what we do. We are dedicated to helping clients turn uncertainty into confidence, risk into opportunity and potential into prosperity. Bold and diverse thinkers are always welcome. Come join us! Learn more at dnb.com/careers.

Develop, maintain, and analyze datasets from diverse sources, including mobile and web, government agencies, web crawls, social media, and proprietary datasets, to create insights for our clients, power our platform, and create an innovative market understanding. Create designs and share ideas for creating and improving data pipelines and tools. This role will support maintaining our existing data pipelines and building new pipelines for increased customer insights.

Key Responsibilities:
- Collaborate with cross-functional teams to identify and design requirements for advanced systems with respect to processing, analyzing, searching, visualizing, developing, and testing vast datasets to ensure data accuracy.
- Implement business requirements by collaborating with stakeholders.
- Become familiar with existing application code and achieve a complete understanding of how the applications function.
- Maintain data quality by writing validation tests.
- Understand a variety of unique data sources.
- Create and maintain data documentation, including processing systems and flow diagrams.
- Help maintain existing systems, including troubleshooting and resolving alerts.
- Meet critical project deadlines.
- Excellent organizational, analytical, and decision-making skills.
- Excellent verbal, written, and interpersonal communication skills.
- Capable of working collaboratively and independently.
- Share ideas across teams to spread awareness and use of frameworks and tooling.
- Show an ownership mindset in everything you do; be a problem solver, be curious and be inspired to take action, be proactive, and seek ways to collaborate and connect with people and teams in support of driving success.
- Continuous growth mindset; keep learning through social experiences and relationships with stakeholders, experts, colleagues and mentors, and widen and broaden your competencies through structured courses and programs.

Key Skills:
- 8+ years of experience in data analysis, visualization, and manipulation.
- Extensive experience working with GCP services, including BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud Run, Cloud Functions and related technologies (a hedged BigQuery sketch follows below).
- Extensive experience with SQL and relational databases, including optimization and design.
- Experience with Amazon Web Services (EC2, RDS, S3, Redshift, EMR, and more).
- Experience with OS-level scripting (bash, sed, awk, grep, etc.).
- Experience in AdTech, web cookies, and online advertising technologies is a plus.
- Testable and efficient Python coding for data processing and analysis.
- Familiarity with parallelization of applications on a single machine and across a network of machines.
- Expertise in containerized infrastructure and CI/CD systems, including Cloud Build, Docker, Kubernetes, Harness, and GitHub Actions.
- Experience with version control tools such as Git, GitHub, and Bitbucket.
- Experience with Agile project management tools such as Jira and Confluence.
- Experience with object-oriented programming; functional programming a plus.
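A hedged sketch of the testable, parameterized BigQuery work this posting describes, using the google-cloud-bigquery client; the project, dataset, and columns are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical dataset/table; a parameterized query keeps it safe and testable.
sql = """
    SELECT source, COUNT(*) AS n, COUNTIF(url IS NULL) AS missing_url
    FROM `example-project.web_crawls.pages`
    WHERE crawl_date = @d
    GROUP BY source
    ORDER BY n DESC
"""
job = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("d", "DATE", "2025-06-01")]
    ),
)

for row in job.result():
    # Flag sources whose missing-URL rate suggests a broken crawl.
    if row.n and row.missing_url / row.n > 0.05:
        print(f"{row.source}: {row.missing_url}/{row.n} rows missing url")
```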
- Analytic tools and ETL/ELT/data pipeline frameworks a plus.
- Experience with data visualization tools like Looker, Tableau, or Power BI.
- Experience working with global remote teams.
- Knowledge of data transformation processes.
- Google Cloud certification a plus.
- Proficiency in Microsoft Office Suite.
- Fluency in English and languages relevant to the team.

This position is internally titled Senior Software Engineer.

All Dun & Bradstreet job postings can be found at https://www.dnb.com/about-us/careers-and-people/joblistings.html and https://jobs.lever.co/dnb. Official communication from Dun & Bradstreet will come from an email address ending in @dnb.com.
Posted 1 week ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad
Work from Office
We are seeking a seasoned Data Engineering Manager with 8+ years of experience to lead and grow our data engineering capabilities. This role demands strong hands-on expertise in Python, SQL, and Spark, and advanced proficiency in AWS and Databricks. As a technical leader, you will be responsible for architecting and optimizing scalable data solutions that enable analytics, data science, and business intelligence across the organization.

Key Responsibilities:
- Lead the design, development, and optimization of scalable and secure data pipelines using AWS services such as Glue, S3, Lambda, and EMR, and Databricks notebooks, jobs, and workflows (a hedged orchestration sketch follows below).
- Oversee the development and maintenance of data lakes on AWS Databricks, ensuring performance and scalability.
- Build and manage robust ETL/ELT workflows using Python and SQL, handling both structured and semi-structured data.
- Implement distributed data processing solutions using Apache Spark / PySpark for large-scale data transformation.
- Collaborate with cross-functional teams including data scientists, analysts, and product managers to ensure data is accurate, accessible, and well-structured.
- Enforce best practices for data quality, governance, security, and compliance across the entire data ecosystem.
- Monitor system performance, troubleshoot issues, and drive continuous improvements in data infrastructure.
- Conduct code reviews, define coding standards, and promote engineering excellence across the team.
- Mentor and guide junior data engineers, fostering a culture of technical growth and innovation.

Requirements:
- 8+ years of experience in data engineering with proven leadership in managing data projects and teams.
- Expertise in Python, SQL, and Spark (PySpark).
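A hedged sketch of driving an AWS Glue job from Python with boto3, the kind of pipeline step described above; the job name and arguments are hypothetical:

```python
import time

import boto3

glue = boto3.client("glue")


def run_glue_job(job_name: str, run_date: str) -> str:
    # Kick off the job with runtime arguments (Glue exposes them as --key).
    run = glue.start_job_run(
        JobName=job_name,
        Arguments={"--run_date": run_date},
    )
    run_id = run["JobRunId"]

    # Poll until the run reaches a terminal state.
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            break
        time.sleep(30)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Glue job {job_name} ended in state {state}")
    return run_id


if __name__ == "__main__":
    run_glue_job("orders-etl", "2025-06-01")  # hypothetical job name
```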
Posted 1 week ago
9.0 - 12.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Location: Bangalore
Experience: 9 - 12 Years
Notice Period: Immediate to 15 Days

Overview
We are seeking an experienced and highly skilled SAP Data Migration Lead to join our team in Bangalore. In this role, you will be responsible for leading the end-to-end data migration strategy and execution across SAP platforms, including S/4HANA implementations. You will collaborate with cross-functional teams to ensure data integrity, accuracy, and completeness throughout the lifecycle of the migration. This is a leadership role requiring deep technical expertise in SAP data conversion tools and methodologies, along with strong project management and stakeholder engagement skills. You will be instrumental in driving successful data transitions, maintaining high-quality standards, and fostering a culture of continuous improvement.

Key Responsibilities
- Lead the planning, execution, and delivery of complex SAP data migration projects.
- Define and implement data migration strategies aligned with project goals and timelines.
- Work closely with functional and technical teams to gather data requirements and develop migration mappings and rules.
- Manage data extraction, transformation, and loading (ETL) activities using tools like SAP Data Services, Cransoft DSP, LSMW, and IDoc.
- Oversee the use of S/4HANA Migration Cockpit and Migration Object Modeler for streamlined data transfers.
- Ensure data quality by establishing and enforcing validation rules and reconciliation processes.
- Lead and mentor the data migration team; provide training and best-practice guidelines.
- Monitor and report on data quality metrics and remediation plans.
- Collaborate with business stakeholders to ensure alignment on deliverables and resolve data-related issues promptly.
- Optimize data migration processes for scalability, accuracy, and repeatability.

Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 9 to 12 years of total experience, with at least 5+ years in IT and 5 to 8 years specifically in SAP Data Conversion.
- Must have completed at least 3 full-cycle SAP implementations.
- Proficiency in SAP Data Services, BackOffice Cransoft/DSP, LSMW, and IDoc processing.
- Strong hands-on experience with S/4HANA Migration Cockpit and ADM platform methodologies.
- Deep understanding of SAP ECC modules including SD, MM, FICO, PP, QM, and PM.
- Proven ability in data modeling, analysis, planning, and harmonization across platforms.
- Excellent communication (verbal and written) and leadership skills.
- Ability to manage stakeholder expectations and drive team collaboration.

Key Skills
- SAP Data Migration strategy and leadership
- SAP Data Services, Cransoft DSP, LSMW, IDoc
- S/4HANA Migration Cockpit, Migration Object Modeler
- ADM platform methodologies and components
- Data quality management and dashboard reporting
- SAP ECC module knowledge (SD, MM, FICO, etc.)
- ETL tools and best practices
- Strong problem-solving, planning, and mentoring abilities

Why Join Us?
At NCG, we foster a culture of innovation, ownership, and continuous learning. This is an opportunity to take on a leadership role in a forward-thinking organization where your contributions will shape mission-critical digital transformations. Join us and make a meaningful impact while advancing your career in SAP data migration.
Posted 1 week ago
8.0 - 12.0 years
12 - 16 Lacs
Mumbai
Work from Office
Operational Risk Data Management

Job Summary / Objective
Act as a strategic advisor and engagement lead, providing executive oversight and direction for the client's OCC-driven data remediation initiatives. Ensure alignment of data management, governance, and quality improvement strategies with regulatory requirements and business objectives.

Key Responsibilities / Duties
- Define and communicate the strategic vision for data governance remediation to client executives.
- Guide the client in modernizing data architecture, risk aggregation, and regulatory reporting processes.
- Advise on development and enforcement of enterprise-wide data policies, standards, and controls.
- Support executive and Board-level reporting and engagement with the OCC or other regulators.
- Lead efforts to foster a culture of data accountability and continuous improvement within the client organization.

Required Skill Sets & Requirements

Enterprise Data Analysis and Management:
- Extensive experience designing and implementing data analysis and management programs in large financial institutions.
- Strong understanding of data quality metrics, master data management, and metadata management.

Regulatory & Risk Management:
- Experience in operational risk domains including but not limited to data risk, fraud risk, tech risk, cyber risk, op resiliency risk, third-party risk, processing risk, services and enterprise ops risk, regulatory management reporting, and financial statement reporting risk.
- Responsibilities include requirements gathering, data acquisition, data quality assessment, and building risk monitoring tools.
- Deep knowledge of regulatory frameworks (BCBS 239) and experience supporting regulatory remediation.

Technical & Analytical:
- Programming proficiency in Python and SQL, and reporting tools like Tableau, Power BI, and Jira.
- Experience guiding IT modernization, system integration, and process optimization.
- Advanced problem-solving, decision-making, and client advisory skills.

Communication & Board Reporting:
- Excellent communication, negotiation, and presentation skills with demonstrated experience in Board-level engagement.

Qualifications
- Master's or advanced degree preferred.
- 6+ years' experience in consulting or executive roles in financial services.
- Professional certifications (CDMP, PMP) highly desirable.
- ORM-Level 1 Support experience required.
- Indian Passport with 1 Year Validity Mandatory.
Posted 1 week ago
1.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
About Us
At ANZ, we're shaping a world where people and communities thrive, driven by a common goal: to improve the financial wellbeing and sustainability of our millions of customers.

About The Role
Role Location: Manyata Technology Park, Bangalore

We are looking for a talented Data Analyst to join our Australia Data tribe within the Australia Retail division. This role focuses on harnessing the power of data to generate insights, manage information, and support decision-making. As a Data Analyst, you will help foster a data-driven culture by developing analytics capabilities, promoting knowledge sharing, and ensuring adherence to standards, governance, and continuous learning across the tribe. In this role, you will support Customer Service and Operations (CSO) Analytics initiatives, focusing on Home Lending and Workware Management data.

What will your day look like?
- Solve complex business challenges by analysing large datasets, defining data roadmaps, and sourcing insights from multiple sources.
- Understand and manage the complete analytics process, from data collection to delivering actionable insights.
- Promote a fact-based decision-making and problem-solving culture, and present insights to drive innovation and improve propositions across the organization.
- Work closely with business stakeholders to understand their needs, challenges, and goals, and optimize strategies to enhance performance.
- Collaborate with other data analysts and data practitioners to define data requirements and ensure the delivery of impactful, data-driven solutions.

What will you bring?
- 7 plus years of experience in the data domain with expertise as a Data Analyst.
- Data Wrangling – Manages large, structured data from multiple sources, applying structured queries to transform, join, and extract data for enhanced analysis.
- Descriptive and Diagnostic Analytics – Analyses advanced trends in complex data, drawing insights from diverse sources to solve critical business challenges and develop prescriptive analytics.
- Data Visualization – Creates sophisticated visualizations, drawing insights, and automating data visuals for decision-making using Qlik/Tableau.
- Risk & Issue Management, Agile Practices & Ways of Working – Independently assesses risks and applies data security principles while leading Agile teams, optimizing workflows, and coaching on best practices.
- Data Modelling – Handles complex data structures, building trust in data reliability and ensuring accurate connections for advanced analytics.
- Soft skills – Problem solving, communication, collaboration, critical thinking, data storytelling.
- Data Quality & Validation Checks – Independently creates and applies checks to ensure accurate, complete data.
- Proficient in Python, Airflow, and Git for workflow automation and version control (a hedged Airflow sketch follows below).
- Familiar with cloud platforms (AWS, Azure, GCP) for efficient data management and processing.
- Expertise in complex SQL queries for large-scale data analysis and reporting.

You're not expected to have 100% of these skills. At ANZ, a growth mindset is at the heart of our culture, so if you have most of these things in your toolbox, we'd love to hear from you.

So why join us?
ANZ is a place where big things happen as we work together to provide banking and financial services across more than 30 markets. With more than 7,500 people, our Bengaluru team is the bank's largest technology, data and operations centre outside Australia.
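A minimal sketch of the Python/Airflow workflow automation the posting mentions, assuming Airflow 2.4+; the DAG name, task logic, and dataset are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull the day's home-lending records from a source system.
    print("extracting batch for", context["ds"])


def validate(**context):
    # Placeholder: run completeness/accuracy checks before publishing.
    print("validating batch for", context["ds"])


with DAG(
    dag_id="cso_home_lending_daily",   # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="validate", python_callable=validate)
    t1 >> t2
```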
In operation for over 33 years, the centre is critical in delivering the bank's strategy and making an impact for our millions of customers around the world. Our Bengaluru team not only drives the transformation initiatives of the bank, it also drives a culture that makes ANZ a great place to be. We're proud that people feel they can be themselves at ANZ, and 90 percent of our people feel they belong.

We know our people need different things to be great in their role, so we offer a range of flexible working options, including hybrid work (where the role allows it). Our people also enjoy a range of benefits including access to health and wellbeing services.

We want to continue building a diverse workplace and welcome applications from everyone. Please talk to us about any adjustments you may require to our recruitment process or the role itself. If you are a candidate with a disability or access requirements, let us know how we can provide you with additional support.

To find out more about working at ANZ, visit https://www.anz.com/careers/. You can apply for this role by visiting ANZ Careers and searching for reference number 95729.

Job Posting End Date: 19th May 2025, 11.59pm (Melbourne, Australia)
Posted 1 week ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Experience: 3+ Years

As a Senior Data Engineer, you'll build robust data pipelines and enable data-driven decisions by developing scalable solutions for analytics and reporting. Perfect for someone with strong database and ETL expertise.

Job Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL processes.
- Work with large data sets from diverse sources.
- Develop and optimize data models, warehouses, and integrations.
- Collaborate with data scientists, analysts, and product teams.
- Ensure data quality, security, and compliance standards.

Qualifications:
- Proficiency in SQL, Python, and data pipeline tools (Airflow, Spark).
- Experience with data warehouses (Redshift, Snowflake, BigQuery).
- Knowledge of cloud platforms (AWS/GCP/Azure).
- Strong problem-solving and analytical skills.
Posted 1 week ago
8.0 - 12.0 years
18 - 22 Lacs
Bengaluru
Work from Office
Date: 12 Jun 2025
Location: Bangalore, KA, IN
Company: Alstom

As a promoter of sustainable mobility, Alstom develops and markets systems, equipment and services for the transport sector. Alstom offers a complete range of solutions (from high-speed trains to metros, tramways and e-buses), passenger solutions, customised services (maintenance, modernisation), infrastructure, signalling and digital mobility solutions. Alstom is a world leader in integrated transport systems. The company recorded sales of 7.3 billion and booked 10.0 billion of orders in the 2016/17 fiscal year. Headquartered in France, Alstom is present in over 60 countries and employs 32,800 people.

A UNIFE report forecasts India's accessible market at 4B over 2016-18, with growth of 6.6%. Alstom has established a strong presence in India and is currently executing metro projects in several Indian cities including Chennai, Kochi and Lucknow, where it is supplying Rolling Stock manufactured out of its state-of-the-art facility at SriCity in Andhra Pradesh. In the Mainline space, Alstom is executing Signalling & Power Supply Systems for the 343 km section of the World Bank-funded Eastern Dedicated Freight Corridor. Construction of the new electric locomotive factory for manufacturing and supply of 800 units of high-horsepower locomotives is also in full swing at Madhepura in Bihar. Alstom has set up an Engineering Centre of Excellence in Bengaluru, and this, coupled with a strong manufacturing base as well as localized supply chains, is uniquely positioned to serve customers across the globe. Today, Alstom in India employs close to 3,000 people and, in line with the Government of India's Make in India policy initiative, Alstom has been investing heavily in the country in producing world-class rolling stock, components, design, research and development to not only serve the domestic market, but also the rest of the world.

OVERALL PURPOSE OF THE ROLE:
The purpose of this role is to build in-house technical expertise in the data integration area and deliver data integration services for the platform.

Primary Goals and Objectives:
This role is responsible for the delivery model for Data Integration services. The person should be responsible for building technical expertise on data integration solutions and providing data integration services. The role is viewed as an expert in solution design, development, performance tuning and troubleshooting for data integration.
RESPONSIBILITIES:

Technical:
- Hands-on experience architecting and delivering solutions related to enterprise integration, APIs, service-oriented architecture, and technology modernizations
- 3-4 years hands-on experience with the design and implementation of integrations in the area of Dell Boomi
- Understand the business requirements and functional requirement documents, and design a technical solution as per the needs
- Good with Master Data Management, Migration and Governance best practices
- Extensive data quality and data migration experience, including proficiency in data warehousing, data analysis and conversion planning for data migration activities
- Lead and build data migration objects as needed for conversions of data from different sources
- Should have architected integration solutions using Dell Boomi for cloud, hybrid and on-premise integration landscapes
- Ability to build and architect a high-performing, highly available, highly scalable Boomi Molecule infrastructure
- In-depth understanding of enterprise integration patterns and prowess to apply them in the customer's IT landscape
- Assists project teams during system design to promote the efficient re-use of IT assets
- Advises project teams during system development to assure compliance with architectural principles, guidelines and standards
- Adept in building Boomi processes with error handling, email alerts and logging best practices
- Should be proficient in using enterprise-level and database connectors
- Excellent understanding of REST, with in-depth understanding of how Boomi processes can expose and consume services using the different HTTP methods, URIs and media types
- Understand Atom, Molecule and Atmosphere configuration and management, platform monitoring, performance optimization suggestions, platform extension, and user permissions control
- Knowledge of API governance and skills like caching, DB management and data warehousing
- Should have hands-on experience in configuring AS2, HTTPS and SFTP involving different authentication methods
- Thorough knowledge of process deployment, applying extensions, setting up schedules, Web Services user management, process filtering and process reporting
- Should be expert with XML and JSON activities like creation, mapping and migrations
- Should have worked on integration of SAP, SuccessFactors, SharePoint, cloud-based apps, web applications and engineering applications
- Support and resolve issues related to data integration deliveries or the platform

Project Management:
- Deliver Data Integration projects using the data integration platform
- Manage partner deliveries by setting up governance of their deliveries
- Understand project priorities, timelines, budget, and deliverables, and proactively push yourself and others to achieve project goals

Managerial:
- The person is an individual contributor, operationally managing a small technical team

Qualifications & Skills:
- 10+ years of experience in the area of enterprise integrations
- Minimum 3-4 years of experience with Dell Boomi
- Should have working experience with databases like SQL Server, and with data warehousing
- Hands-on experience with REST, SOAP, XML, JSON, SFTP, EDI
- Should have worked on integration of multiple technologies like SAP, web, and cloud-based apps

EDUCATION: B.E.
BEHAVIORAL COMPETENCIES:
- Demonstrate excellent collaboration skills, as the person will be interacting with multiple business units, solution managers and internal IT teams
- Should have excellent analytical and problem-solving skills
- Coaches, supports and trains other team members
- You demonstrate excellent communication skills

TECHNICAL COMPETENCIES & EXPERIENCE:
- Technical expertise in Dell Boomi for data integration is a MUST
- Language Skills: English
- IT Skills: Dell Boomi, SQL, REST APIs, EDI, JSON, XML

Location for the role: Bangalore
Travel: 5%

Alstom is committed to creating a diverse & international working environment that reflects the future of our industry, our clients and end-users. As an employee, you will have a unique opportunity to continue to build your career and directly contribute to the expanding growth of the global transport industry.

Job Type: Experienced
Posted 1 week ago
8.0 - 13.0 years
25 - 30 Lacs
Gurugram
Work from Office
You Lead the Way. We've Got Your Back.

Join the exciting journey of establishing the new India Data Office at American Express India! This dynamic function will play a pivotal role in harnessing third-party data and transforming it to fuel priority use cases and regulatory reporting, driving innovation and growth. With a strong focus on Data Management and Governance, you'll ensure compliance with American Express's data, risk, and privacy policies while collaborating closely with Business, Technology, and 3rd Party teams to launch cutting-edge products in the Indian market. The India Data Office will be accountable to Amex's International Credit Services Data Office. Be part of a team that is shaping the future of data at American Express India and making a significant impact!

How will you make an impact in this role?
- Skilled Manager, Data Management, to manage data ingestion/transformation products determined in consultation with business teams, use case owners and external service providers.
- Manage data migration/ingestion products (i.e., data pipelines, essential data quality and controls such as selected CDEs, BnC) determined in consultation with business, use case owners and third-party vendors.
- Ensure appropriate user access, data quality, integrity, and compliance with regulatory requirements.
- Lead optimization of data product backlogs; efficiently translate business needs into requirements on Rally and articulate them clearly to the scrum teams.
- Stakeholder management and collaboration across a wide range of partners including Product, Technology and Governance.
- End-to-end program management, including handling project status and managing and raising risks and issues.
- Manage data transformation data products (i.e., data transformation routines, and support use case owners in mapping their requirements to Lumi SOR tables).
- Lead a team of data engineers and scientists to drive modernization of Individual platforms, with the target of improving the quality and availability of data and linkages for Individual Entities.
- Engage with use case owners, product managers and partners to ensure smooth delivery of end-to-end products and capabilities, identifying needs, opportunities, and gaps.

Minimum Qualifications
- 8+ years as a Data Management and/or Product Owner building and launching data capabilities.
- Bachelor's or master's degree in information technology, computer science, information security, mathematics, statistics, or any other relevant qualification.
- Prior experience with third parties required.
- Experience with data pipelines, ETL/ELT, data warehousing and cloud-based platforms.
- Strong leadership experience in leading/creating high-performing teams with diverse skills.
- Strong quantitative skills with hands-on experience in analyzing large amounts of data and data flows to identify patterns/insights and generate useful recommendations with high value.
- Ability to compile, summarize, communicate, and present findings to senior leadership.
- Experience in launching software or services in partnership with engineering teams, and a high degree of proficiency in prototyping, iterative development, and understanding of Agile principles.

Preferred Qualifications
- Domain knowledge of the Payment Card business (Accounts Receivables, Loyalty, AML etc.) preferred.
Posted 1 week ago
2.0 - 6.0 years
3 - 7 Lacs
Chennai
Work from Office
Career Area: Technology, Digital and Data

Your Work Shapes the World at Caterpillar Inc. When you join Caterpillar, you're joining a global team who cares not just about the work we do but also about each other. We are the makers, problem solvers, and future world builders who are creating stronger, more sustainable communities. We don't just talk about progress and innovation here, we make it happen, with our customers, where we work and live. Together, we are building a better world, so we can all enjoy living in it.

Cat Digital is the digital and technology arm of Caterpillar Inc., responsible for bringing world-class digital capabilities to our products and services. With almost one million connected assets worldwide, we're focused on using IoT and other data, technology, advanced analytics and AI capabilities to help our customers build a better world. This position is in the Connected Data Quality team in Cat Digital. The team is responsible for building tools, dashboards, and processes to enable end-to-end (E2E) telemetry data quality monitoring, finding sources of quality issues, and working with process partners to resolve the problems at source.

JOB PURPOSE: To identify and find solutions to telemetry data failures, and to coordinate and communicate quality issues to CPI teams and process partners to drive improvements.

JOB DUTIES:
- Identify telemetry data quality issues by data mining and analysis.
- Develop telemetry data business rules and dashboards; work with internal teams to get buy-in on requirements, implementation, and testing.
- Lead validation of business rules, defect triage, and monitoring of telemetry data for production and field follow.
- Investigate hardware and software issues and provide technical information for the continuous product improvement (CPI) process.
- Assess the impact of field product problems on the company and the customer.
- Ability to work with global teams across different time zones and backgrounds.

BACKGROUND/EXPERIENCE:
- This position typically requires an accredited engineering degree.
- 5-7 years of development or product support experience in IoT/telemetry systems.
- Must have demonstrated excellent troubleshooting skills and the ability to work effectively in a high-velocity environment.
- Requires the ability to effectively communicate technical information to a broad range of audiences.
- The incumbent must have strong initiative, interpersonal, organizational, and teamwork skills.

Top candidates will have:
- Knowledge of Caterpillar telemetry devices and telemetry data.
- Previous knowledge of Caterpillar products, working with a different unit, or working with different products, processes or systems is desirable.
- Knowledge of Tableau/BI tools, scripting to automate data analysis, and SQL query development (a hedged scripting sketch follows after the skill descriptors).
- Experience with handling large data sets in various formats.
- Cat machine system embedded SW development experience, particularly in the VIMS area.

Skill Descriptors

Decision Making and Critical Thinking: Knowledge of the decision-making process and associated tools and techniques; ability to accurately analyze situations and reach productive decisions based on informed judgment.
Level: Working Knowledge
- Applies an assigned technique for critical thinking in a decision-making process.
- Identifies, obtains, and organizes relevant data and ideas.
- Participates in documenting data, ideas, players, stakeholders, and processes.
- Recognizes, clarifies, and prioritizes concerns.
- Assists in assessing risks, benefits and consideration of alternatives.
Effective Communications: Understanding of effective communication concepts, tools and techniques; ability to effectively transmit, receive, and accurately interpret ideas, information, and needs through the application of appropriate communication behaviors.
Level: Working Knowledge
- Delivers helpful feedback that focuses on behaviors without offending the recipient.
- Listens to feedback without defensiveness and uses it for own communication effectiveness.
- Makes oral presentations and writes reports needed for own work.
- Avoids technical jargon when inappropriate.
- Looks for and considers non-verbal cues from individuals and groups.

This is intended as a general guide to the job duties for this position and is intended for the purpose of establishing the specific salary grade. It is not designed to contain or be interpreted as an exhaustive summary of all responsibilities, duties and effort required of employees assigned to this job. At the discretion of management, this description may be changed at any time to address the evolving needs of the organization. It is expressly not intended to be a comprehensive list of essential job functions as that term is defined by the Americans with Disabilities Act.

Posting Dates: June 16, 2025 - June 29, 2025

Caterpillar is an Equal Opportunity Employer.
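A hedged sketch of the kind of scripted telemetry data-quality analysis this posting describes, using pandas; the extract file, column names, and thresholds are hypothetical:

```python
import pandas as pd

# Hypothetical telemetry extract: one row per asset message.
df = pd.read_csv("telemetry_extract.csv", parse_dates=["message_ts"])

# Rule 1: required fields must be populated.
missing = df[["asset_id", "message_ts", "fuel_level_pct"]].isna().mean()

# Rule 2: values must fall in a physically plausible range.
out_of_range = ~df["fuel_level_pct"].between(0, 100)

# Rule 3: flag assets that went silent for more than 24 hours.
latest = df.groupby("asset_id")["message_ts"].max()
silent = latest[latest < df["message_ts"].max() - pd.Timedelta(hours=24)]

print("Null rate per required field:\n", missing)
print(f"Out-of-range fuel readings: {out_of_range.sum()}")
print(f"Assets silent > 24h: {len(silent)}")
```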
Posted 1 week ago
2.0 - 5.0 years
30 - 35 Lacs
Bengaluru
Work from Office
YOUR IMPACT

Are you passionate about developing mission-critical, high-quality software solutions, using cutting-edge technology, in a dynamic environment? We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We:
- build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm.
- have access to the latest technology and to massive amounts of structured and unstructured data.
- leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications.

Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criteria will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity.

HOW YOU WILL FULFILL YOUR POTENTIAL
As a member of our team, you will:
- partner globally with sponsors, users and engineering colleagues across multiple divisions to create end-to-end solutions,
- learn from experts,
- leverage various technologies including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elastic Search, Kafka, and Kubernetes,
- be able to innovate and incubate new ideas,
- have an opportunity to work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large-scale data, and building surveillance detection models (a hedged sketch follows below),
- be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products.

QUALIFICATIONS
A successful candidate will possess the following attributes:
- A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
- Expertise in Java, as well as proficiency with databases and data manipulation.
- Experience in end-to-end solutions, automated testing and SDLC concepts.
- The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.

Experience in some of the following is desired and can set you apart from other candidates:
- developing in large-scale systems, such as MapReduce on Hadoop/HBase,
- data analysis using tools such as SQL, Spark SQL, Zeppelin/Jupyter,
- API design, such as to create interconnected services,
- knowledge of the financial industry and compliance or risk functions,
- ability to influence stakeholders.
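Purely as an illustration of a surveillance-style Spark SQL aggregation (not the firm's actual models), here is a hedged sketch; the trade schema and volume threshold are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("trade-surveillance-sketch").getOrCreate()

# Hypothetical trade data; a real pipeline would read from curated storage.
trades = spark.createDataFrame(
    [("acct1", "XYZ", "2025-06-02 09:30:01", 500),
     ("acct1", "XYZ", "2025-06-02 09:30:09", 700),
     ("acct2", "ABC", "2025-06-02 10:15:00", 100)],
    ["account", "symbol", "ts", "qty"],
)
trades.createOrReplaceTempView("trades")

# Flag accounts whose per-minute volume in one symbol exceeds a threshold;
# real surveillance logic would be far richer (cancels, wash trades, etc.).
alerts = spark.sql("""
    SELECT account, symbol,
           date_trunc('minute', CAST(ts AS TIMESTAMP)) AS minute,
           SUM(qty) AS vol
    FROM trades
    GROUP BY account, symbol, date_trunc('minute', CAST(ts AS TIMESTAMP))
    HAVING SUM(qty) > 1000
""")
alerts.show()
```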
Posted 1 week ago
5.0 - 10.0 years
5 - 10 Lacs
Nagpur, Hyderabad, Pune
Work from Office
As a Senior Technical Consultant you will participate in all aspects of the software development lifecycle, which includes estimating, technical design, implementation, documentation, testing, deployment and support of applications developed for our clients. As a member working in a team environment you will take direction from solution architects and leads on development activities.

Perficient is always looking for the best and brightest talent, and we need you! We're a quickly-growing, global digital consulting leader, and we're transforming the world's largest enterprises and biggest brands. You'll work with the latest technologies, expand your skills, and become a part of our global community of talented, diverse, and knowledgeable colleagues.

Requirements:
- 4+ years of experience in monitoring and troubleshooting.
- Proven experience with AWS ETL services such as Glue, Lambda, and Step Functions.
- Strong understanding of data warehousing concepts and data modeling principles.
- Experience with SQL and scripting languages like Python or Bash for data manipulation and automation.
- Experience with monitoring tools.
- Excellent communication, collaboration, leadership, and problem-solving skills.

Responsibilities:
- Implement secure and scalable data pipelines on AWS utilizing services like Glue, Lambda, Step Functions, and Kinesis.
- Should have worked on monitoring tools and support the developers with scalable solutions (a hedged monitoring sketch follows below). Should be diligent and proactive in resolving on-the-fly issues and escalate to the team as needed.
- Optimize data pipelines for performance, scalability, and cost-effectiveness.
- Hands-on knowledge of MS SQL and AWS Athena.
- Implement data quality checks and monitoring to ensure data integrity throughout the ETL process.
- Stay up to date on the latest advancements in AWS data services and ETL best practices.
- Troubleshoot and resolve complex data pipeline issues.
- Willing to work in rotating shifts.
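A hedged sketch of scripted pipeline monitoring with boto3 and CloudWatch, the kind of check this role would automate; the Lambda function name is hypothetical:

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

# Pull the last 24h of error counts for a hypothetical pipeline Lambda.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "orders-transform"}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,                 # one datapoint per hour
    Statistics=["Sum"],
)

errors = sum(dp["Sum"] for dp in resp["Datapoints"])
if errors:
    # In practice this would raise an alert via SNS/PagerDuty instead.
    print(f"orders-transform logged {int(errors)} errors in the last 24h")
else:
    print("orders-transform healthy over the last 24h")
```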
Posted 1 week ago
5.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Responsibilities:
Design, develop, and maintain end-to-end data pipelines on AWS, utilizing serverless architecture.
Implement data ingestion, validation, and transformation procedures using AWS services such as Lambda, Glue, Kinesis, SNS, SQS, and CloudFormation.
Write orchestration tasks within Apache Airflow (see the DAG sketch after this listing).
Develop and execute data quality checks using Great Expectations to ensure data integrity and reliability.
Collaborate with other teams to understand mission objectives and translate them into data pipeline requirements.
Utilize PySpark for complex data processing tasks within AWS Glue jobs.
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
Strong proficiency in the Python programming language.
Hands-on experience with AWS services.
Experience with serverless architecture and Infrastructure as Code (IaC) using AWS CDK.
Proficiency in Apache Airflow for orchestration of data pipelines.
Familiarity with data quality assurance techniques and tools, preferably Great Expectations.
Experience with SQL for data manipulation and querying.
Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Experience with Data Lakehouse, dbt, or the Apache Hudi data format is a plus.
Mandatory Skills: Python, AWS, Infrastructure as Code (IaC) using AWS CDK, Apache Airflow, SQL, Data Lakehouse, dbt, Apache Hudi
VENDOR PROPOSED RATE (as per ECMS system): 12000 INR/day
Work Location: Anywhere in India
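To make the orchestration pattern above concrete, here is a minimal Airflow DAG sketch that runs a Glue job and then a data-quality gate. It assumes a recent Airflow 2.x with the apache-airflow-providers-amazon package installed; the DAG id, job name, script location, and IAM role are all hypothetical placeholders, and the quality gate is a stub where a real pipeline might invoke a Great Expectations checkpoint.

```python
# Minimal Airflow DAG sketch: run a Glue job, then a data-quality gate.
# Job name, script location, and role are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

def validate_row_count(**_):
    """Placeholder quality gate; a real pipeline might run a
    Great Expectations checkpoint here instead."""
    # e.g. read the output partition and assert expected row counts
    pass

with DAG(
    dag_id="serverless_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform = GlueJobOperator(
        task_id="run_glue_transform",
        job_name="daily-transform",  # hypothetical Glue job
        script_location="s3://example-bucket/scripts/transform.py",
        iam_role_name="glue-etl-role",  # placeholder role
    )

    quality_gate = PythonOperator(
        task_id="quality_gate",
        python_callable=validate_row_count,
    )

    transform >> quality_gate
```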
Posted 1 week ago
2.0 - 7.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Willing to work from office (Monday to Friday) in US-EST shift (night shift); suitable candidates will be called to the office for a face-to-face interview.
EXPERIENCE: 2+ years in international outbound calling
LANGUAGE (Fluent): English
Role Description
The Associate is a highly motivated, personable professional and an excellent communicator who focuses on recruiting market research participants via phone and carries out cold calling to engage new respondents to complete M360 Research projects. The Associate provides high-quality customer service to panel members and other research participants by answering questions and resolving enquiries regarding registration, accounts, study participation, compensation, and more.
Duties and Responsibilities
Project Recruitment:
Recruit M360 Research panelists via phone and carry out cold calling to recruit respondents for allocated quantitative and qualitative projects.
Carry out online desk research to find phone numbers of healthcare professionals and call them to recruit them to participate in market research studies.
Ensure project details are clearly communicated to participants when contacting them over the phone.
Inform the line manager of any recruitment issues, delays, and foreseeable problems that can affect the successful delivery of the project.
Provide insightful and relevant feedback on project feasibility based on market intelligence gathered while talking to respondents over the phone.
Complete phone screening of recruited respondents for telephone and in-facility interviews.
Ensure qualitative respondents are scheduled and confirmed over the phone.
Ensure confirmation letters and consent forms are sent, and complete follow-up calls if needed to chase materials.
Ensure that daily call-count and strike-rate targets are achieved.
Strictly follow phone quality communication parameters and guidelines.
As part of job responsibilities, you are required to comply with ISO 20252:2019 and ISO 27001 standards.
Willing to work in US EST shift; the role requires you to support the US (shift time 6:00 PM to 3:00 AM IST).
Panel Engagement:
Call M360 Research registered respondents who have not confirmed their email account to complete the onboarding process.
Call inactive M360 Research respondents and invite them to reactivate their M360 account.
Panel Support:
Provide high-quality professional support to M360 Research members via telephone and email/support-ticket communications.
Master and work across multiple systems to investigate, troubleshoot, and handle enquiries and complaints, providing appropriate solutions and alternatives.
Work effectively with our US and EU Operations/Project Management team members to resolve project-specific issues, acting on behalf of the user while balancing user advocacy and company profitability.
Handle all enquiries according to company policy and expectations in regard to outcomes, time to resolution, and communication standards.
Complete the verification process for newly registered panelists and carry out data quality checks for existing panelists.
Build and maintain strong relationships with panel members through timely and professional communication, quality problem solving, and creative thinking.
Provide input and assist with updating communication procedures, guidelines, and enhancements to the system and processes.
Complete panel verifications and provide assistance with market research project setup.
Experience, skills and qualifications
Fluent in English.
1+ year of experience as a telecaller with a US process or international call centre.
Excellent interpersonal and communication skills, both verbal and written.
A strong comfort level with being on the phone.
Able to work as part of a team and show flexibility in the tasks asked of them.
Independently motivated and inspired by working in a dynamic environment.
Comfortable with change, with the ability to seek opportunity in uncertain conditions.
Analytical and a strategic thinker.
Ensures 100% accuracy and demonstrates excellent attention to detail.
Strong organizational skills and the ability to multitask.
Comfortable working the night shift (US EST time zone).
Willing to work from office.
Visit our company website before the interview: www.m360research.com
Qualifications: Any Graduate/Under-Graduate (minimum 12th pass)
Posted 1 week ago
5.0 - 10.0 years
35 - 40 Lacs
Mumbai
Work from Office
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.
As a Data Platform Engineering Lead at JPMorgan Chase within Asset and Wealth Management, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.
Job responsibilities
Lead the design, development, and implementation of scalable data pipelines and ETL batches using Python/PySpark on AWS (see the sketch after this listing).
Execute standard software solutions, design, development, and technical troubleshooting.
Use infrastructure as code to build applications that orchestrate and monitor data pipelines, create and manage on-demand compute resources on the cloud programmatically, and create frameworks to ingest and distribute data at scale.
Manage and mentor a team of data engineers, providing guidance and support to ensure successful product delivery and support.
Collaborate proactively with stakeholders, users, and technology teams to understand business/technical requirements and translate them into technical solutions.
Optimize and maintain data infrastructure on the cloud platform, ensuring scalability, reliability, and performance.
Implement data governance and best practices to ensure data quality and compliance with organizational standards.
Monitor and troubleshoot applications and data pipelines, identifying and resolving issues in a timely manner.
Stay up to date with emerging technologies and industry trends to drive innovation and continuous improvement.
Add to a team culture of diversity, equity, inclusion, and respect.
Required qualifications, capabilities, and skills
Formal training or certification in software engineering concepts and 5+ years of applied experience.
Experience in software development and data engineering, with demonstrable hands-on experience in Python and PySpark.
Proven experience with cloud platforms such as AWS, Azure, or Google Cloud.
Good understanding of data modeling, data architecture, ETL processes, and data warehousing concepts.
Experience with, or good knowledge of, cloud-native ETL platforms like Snowflake and/or Databricks.
Experience with big data technologies and services like AWS EMR, Redshift, Lambda, and S3.
Proven experience with efficient cloud DevOps practices and CI/CD tools like Jenkins/GitLab for data engineering platforms.
Good knowledge of SQL and NoSQL databases, including performance tuning and optimization.
Experience with declarative infrastructure provisioning tools like Terraform, Ansible, or CloudFormation.
Strong analytical skills to troubleshoot issues and optimize data processes, working independently and collaboratively.
Experience in leading and managing a team/pod of engineers, with a proven track record of successful project delivery.
Preferred qualifications, capabilities, and skills
Knowledge of the machine learning model lifecycle, language models, and cloud-native MLOps pipelines and frameworks is a plus.
Familiarity with data visualization tools and data integration patterns.
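As a rough sketch of the PySpark-on-AWS batch work described above: read raw data from S3, apply a transformation, and write partitioned Parquet. All paths, column names, and the partitioning scheme are illustrative assumptions, not details from the role.

```python
# Minimal PySpark batch ETL sketch: raw S3 JSON in, curated Parquet out.
# Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-positions-etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/positions/")  # hypothetical

# Deduplicate on the business key, derive a date column, drop bad rows.
cleaned = (raw
           .dropDuplicates(["position_id"])
           .withColumn("as_of_date", F.to_date("as_of_ts"))
           .filter(F.col("quantity").isNotNull()))

# Partition by date so downstream queries can prune efficiently.
(cleaned.write
        .mode("overwrite")
        .partitionBy("as_of_date")
        .parquet("s3://example-bucket/curated/positions/"))
```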
Posted 1 week ago
4.0 - 9.0 years
6 - 10 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Pearson:
At Pearson, we're committed to a world that's always learning and to our talented team who makes it all possible. From bringing lectures vividly to life to turning textbooks into laptop lessons, we are always re-examining the way people learn best, whether it's one child in our own backyard or an education community across the globe. We are bold thinkers and standout innovators who motivate each other to explore new frontiers in an environment that supports and inspires us to always be better.
Background Information:
Shared Services, a captive unit based out of Noida, enables positive changes in performance and stakeholder engagement through a centralized operating model. Shared Services is a global function supporting Pearson Higher Education. As a team, we manage a variety of data processes to ensure that data is valid, accurate, and compliant with governed rules. We also provide solutioning to business teams if they require changes in the database or in the functionality of any tool. As content continues to proliferate across multiple emerging digital platforms, our team provides resources to enable scalability and cost containment. We also facilitate collaboration between business and technology teams who contribute to the products.
Role Description:
We are seeking a detail-oriented and analytical professional to join our team in the role of Associate, Data Operations. This role is responsible for ensuring the accuracy, consistency, and integrity of data across systems and workflows. The individual will support data lifecycle management, execute operational processes, and collaborate with cross-functional teams to drive data quality, compliance, and timely delivery.
Key Responsibilities:
Manage end-to-end data entry, updates, and maintenance across internal platforms and systems.
Monitor data quality, identify anomalies or discrepancies, and take corrective actions as needed (a sketch of such a check follows this listing).
Support the creation, tracking, and maintenance of item/product/master data and other key business datasets.
Partner with cross-functional teams to ensure timely and accurate data inputs aligned with business rules and timelines.
Document and optimize data operational processes to enhance efficiency and consistency.
Conduct routine audits and validation checks to ensure data compliance with internal standards and policies.
Assist in onboarding new tools or systems related to data operations, including testing and training.
Education, Qualifications & Functional Competencies:
Bachelor's degree in Business, Information Systems, Data Science, or a related field.
4 years of experience in data operations, data management, or related roles.
Strong proficiency in Excel.
Experience with data entry and governance.
Strong attention to detail and a commitment to data accuracy.
Excellent organizational and communication skills.
Ability to work independently as well as part of a team in a fast-paced environment.
Core Behavioural Competencies:
Essential:
Ability to work collaboratively as a team.
Flexible to adapt to changes, with a strong customer focus.
Good personnel management skills with the ability to understand business processes and execute routine work.
Flexibility to work with international teams across multiple time zones.
Confident, enthusiastic, curious, and result-driven.
Desired:
Flexible to change and adopting new ways of working.
Able to work with diverse stakeholders of varied cultural backgrounds.
1145110 | Job: Data Engineering | Job Family: TECHNOLOGY
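For illustration, a routine master-data audit like those listed above can be scripted. The following minimal pandas sketch checks a dataset for duplicate keys and missing required fields; the file name and column names are hypothetical placeholders, not Pearson systems.

```python
# Minimal data-audit sketch: find duplicate keys and missing required
# fields in a master-data extract. Names below are placeholders.
import pandas as pd

REQUIRED_COLUMNS = ["product_id", "title", "price"]  # illustrative

df = pd.read_excel("master_data.xlsx")  # placeholder file

# Duplicate keys violate master-data uniqueness rules.
dupes = df[df.duplicated(subset=["product_id"], keep=False)]

# Rows missing any required field need corrective action.
missing = df[df[REQUIRED_COLUMNS].isna().any(axis=1)]

print(f"{len(dupes)} duplicate rows, {len(missing)} rows with missing fields")
```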
Posted 1 week ago
3.0 - 7.0 years
5 - 9 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Job Summary:
We are seeking a passionate and experienced Big Data Engineer with strong hands-on expertise in Scala, Apache Spark, Kafka, Spark Streaming, and SQL. You will be responsible for building and maintaining real-time and batch data processing pipelines that support our data-driven decision-making across business functions.
Key Responsibilities:
Design and implement scalable and high-performance data processing pipelines using Apache Spark, core and streaming (a streaming sketch follows this listing).
Develop clean and efficient Scala code to handle large-scale data transformations.
Integrate Apache Kafka for real-time data ingestion and streaming workflows.
Write optimized SQL queries for data transformation and analytics.
Collaborate with data scientists, analysts, and other engineers to deliver robust data solutions.
Ensure data quality, reliability, and consistency across all platforms.
Optimize system performance and troubleshoot data-related issues.
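The posting's stack is Scala; for consistency with the other sketches in this document, here is the same Kafka-to-Spark pattern in PySpark Structured Streaming, where the API shape is equivalent. The broker address and topic name are placeholders.

```python
# Minimal Spark Structured Streaming sketch: consume a Kafka topic and
# maintain a running event count. Broker and topic are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "orders")  # placeholder topic
          .load())

# Kafka delivers bytes; a real job would parse this payload (e.g. JSON)
# before aggregating. Here we simply count events.
counts = (events
          .select(F.col("value").cast("string").alias("payload"))
          .groupBy().count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())

query.awaitTermination()
```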
Posted 1 week ago
0.0 - 5.0 years
2 - 7 Lacs
Pune
Work from Office
It's fun to work at a company where people truly believe in what they are doing!
Job Description:
Job Summary:
The Operations Analyst role provides technical support for the full lifecycle of the electronic discovery reference model (EDRM), including ingestion of data, quality control, document production, and document review projects. The position requires attention to detail, multi-tasking, and analytical skills, as well as someone who works well in a team. The candidate must be able to work under the pressure of strict deadlines on multiple projects in a fast-paced environment.
Essential Job Responsibilities
Utilize proprietary and 3rd-party eDiscovery software applications for electronic discovery and data recovery processes.
Load, process, and search client data in many different file formats (a load-file sketch follows this listing).
Conduct relevant searches of electronic data using proprietary tools.
Work closely with team members to troubleshoot data issues (prior to escalation to operations senior management and/or IT/Development), research software and/or techniques to solve problems, and carry out complex data analysis tasks.
Provide end-user and technical documentation and training for supported applications.
Communicate and collaborate with other company departments.
Generate reports from various database platforms for senior management.
Generate written status reports for clients, managers, and project managers.
Work closely with internal departments on streamlining processes and development of proprietary tools.
Qualifications & Certifications
Solid understanding of Windows and all MS Office applications is required.
Basic UNIX skills and an understanding of hardware, networking, and delimited files would be an advantage.
Experience with database applications and knowledge of litigation support software is desirable.
Strong analytical and problem-solving skills are essential for this role.
Demonstrated ability to work in a team environment, follow detailed instructions, and meet established deadlines.
A self-starter with the ability to visualize data and software behavior and coordinate the two.
Fluency in English (verbal and written) is required.
Bachelor's degree or final-year student, preferably in a computer/technical or legal field, or an equivalent combination of education and/or experience required.
If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
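As a small illustration of the delimited-file quality control mentioned above, the sketch below verifies that every record in a delimited load file has the expected field count before loading. The delimiter, expected field count, and file name are illustrative assumptions; real eDiscovery load files vary by platform.

```python
# Minimal delimited-file QC sketch: confirm every record has the
# expected field count. Delimiter and file name are placeholders.
import csv

EXPECTED_FIELDS = 10  # illustrative column count
DELIMITER = "|"       # placeholder; real load files vary

bad_rows = []
with open("load_file.txt", newline="", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter=DELIMITER)
    for lineno, row in enumerate(reader, start=1):
        if len(row) != EXPECTED_FIELDS:
            bad_rows.append(lineno)

print(f"{len(bad_rows)} malformed records: {bad_rows[:20]}")
```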
Posted 1 week ago