
918 Dataflow Jobs - Page 32

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Exciting opportunity for Data Engineers with 5+ years of experience at Telus Digital! If you are looking for a change and have the skills below, please DM me; I would be happy to refer you.

Skillset required:
- Python
- SQL
- BigQuery
- Composer/Airflow
- Dataflow
- CI/CD

Posted 1 month ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Chandigarh

Work from Office

Key Responsibilities

- Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.
- Support data ingestion, transformation, and storage processes for structured and unstructured datasets.
- Participate in performance tuning and optimization of existing data workflows.
- Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery.
- Document code, processes, and architecture for reproducibility and future reference.
- Debug issues in data pipelines and contribute to their resolution.
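For illustration, the kind of GCP pipeline this listing describes might look like the following minimal Apache Beam sketch. All project, topic, bucket, and table names are hypothetical placeholders, not details from the posting:

```python
# Hypothetical sketch: a minimal Apache Beam streaming pipeline on GCP that
# reads JSON events from Pub/Sub and appends them to a BigQuery table.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    streaming=True,
    project="my-project",          # assumed project id
    region="asia-south1",
    runner="DataflowRunner",       # use "DirectRunner" for local testing
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/events")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```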

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Company Overview

Viraaj HR Solutions is a leading provider of human resources services, dedicated to empowering businesses with top-notch talent acquisition and management solutions. We pride ourselves on our commitment to excellence and our innovative approach to meeting client needs. Our mission is to enhance organizational efficiency and productivity through strategic workforce planning. We value integrity, collaboration, and continuous improvement in everything we do.

Job Title: GCP Data Engineer
Work Mode: On-site
Location: India

Role Responsibilities

- Design and implement scalable data pipelines in Google Cloud Platform (GCP).
- Develop data models and architecture for data warehousing solutions.
- Create ETL (Extract, Transform, Load) processes to streamline data management.
- Optimize data flows and processes for efficiency and performance.
- Collaborate with data scientists and analysts to understand data requirements and design optimal solutions.
- Manage and monitor data ingestion from various sources.
- Conduct data quality checks and troubleshoot issues related to data integrity.
- Utilize BigQuery for data analysis and reporting tasks.
- Implement data security measures to ensure compliance with regulations.
- Document processes, data models, and reports for reference and training purposes.
- Stay current with emerging technologies and best practices in data engineering.
- Engage in code reviews, troubleshooting, and debugging of data solutions.
- Work closely with the development team to integrate data processes into applications.
- Train and support team members in GCP tools and data engineering principles.
- Prepare and present reports on data insights and performance metrics.

Qualifications

- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering roles.
- Strong experience with Google Cloud Platform, particularly BigQuery and Dataflow.
- Expertise in data modeling and database design.
- Proficient in Python and SQL programming.
- Hands-on experience with ETL tools and methodologies.
- Understanding of data warehousing concepts and architectures.
- Familiarity with cloud architecture and services.
- Excellent analytical and problem-solving skills.
- Strong communication and collaboration abilities.
- Experience with Agile methodologies is a plus.
- Ability to work independently and manage multiple tasks simultaneously.
- Relevant certifications in GCP or data engineering are advantageous.
- Experience with big data technologies such as Hadoop or Spark is a plus.
- Commitment to continuous learning and professional development.

Skills: data security, GCP, data modeling, cloud architecture, ETL, data engineering, BigQuery, Python, Google Cloud Platform, Dataflow, Spark, database design, SQL, data warehousing, Hadoop, Agile methodologies
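As a loose illustration of the ETL work described above, a scripted BigQuery transformation step might look like the sketch below. The project, dataset, and table names are assumed placeholders, not details from the posting:

```python
# Hypothetical sketch: run a BigQuery ETL transform that cleans raw records
# into a reporting table. Names are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project id

sql = """
CREATE OR REPLACE TABLE analytics.orders_clean AS
SELECT
  order_id,
  LOWER(TRIM(customer_email)) AS customer_email,
  SAFE_CAST(amount AS NUMERIC) AS amount,
  DATE(created_at) AS order_date
FROM raw.orders
WHERE order_id IS NOT NULL
"""

job = client.query(sql)   # starts the query job
job.result()              # waits for completion, raises on error
print("Transform finished.")
```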

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role

Grade Level (for internal use): 10

S&P Global Mobility. The Role: Senior Data Engineer

Department Overview

Automotive Insights at S&P Mobility leverages technology and data science to provide unique insights, forecasts, and advisory services spanning every major market and the entire automotive value chain, from product planning to marketing, sales, and the aftermarket. We provide the most comprehensive data spanning the entire automotive lifecycle: past, present, and future. With over 100 years of history, unmatched credentials, and a larger customer base than any other provider, we are the industry benchmark for clients around the world, helping them make informed decisions to capitalize on opportunity and avoid risk. Our solutions are used by nearly every major OEM, 90% of the top 100 tier-one suppliers, media agencies, governments, insurance companies, and financial stakeholders to provide actionable insights that enable better decisions and better results.

Position Summary

S&P Global is seeking an experienced and driven Senior Data Engineer who is passionate about delivering high-value, high-impact solutions to the world's most demanding, high-profile clients. The ideal candidate must have at least 5 years of experience in developing and deploying data pipelines on Google Cloud Platform (GCP) and should be passionate about building high-quality, reusable pipelines using cutting-edge technologies. This role involves designing, building, and maintaining scalable data pipelines, optimizing workflows, and ensuring data integrity across multiple systems. The candidate will collaborate with data scientists, analysts, and software engineers to develop robust and efficient data solutions.

Responsibilities

- Design, develop, and maintain scalable ETL/ELT pipelines.
- Optimize and automate data ingestion, transformation, and storage processes.
- Work with structured and unstructured data sources, ensuring data quality and consistency.
- Develop and maintain data models, warehouses, and databases.
- Collaborate with cross-functional teams to support data-driven decision-making.
- Ensure data security, privacy, and compliance with industry standards.
- Troubleshoot and resolve data-related issues in a timely manner.
- Monitor and improve system performance, reliability, and scalability.
- Stay up to date with emerging data technologies and recommend improvements to our data architecture and engineering practices.

What You Will Need

- Strong programming skills in Python.
- 5+ years of experience in data engineering, ETL development, or a related role.
- Proficiency in SQL and experience with relational (PostgreSQL, MySQL, etc.) and NoSQL (DynamoDB, MongoDB, etc.) databases.
- Proficiency building data pipelines on Google Cloud Platform (GCP) using services such as Dataflow, Cloud Batch, BigQuery, Bigtable, Cloud Functions, Cloud Workflows, and Cloud Composer.
- Strong understanding of data modeling, data warehousing, and data governance principles.
- Capable of mentoring junior data engineers and assisting them with technical challenges.
- Familiarity with orchestration tools like Apache Airflow.
- Familiarity with containerization and orchestration (Docker, Kubernetes).
- Experience with version control systems (Git) and CI/CD pipelines.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.
- Excellent communication skills.
- Able to convert business queries into technical documentation.
- Hands-on experience with Snowflake is a plus.
- Experience with big data technologies (Hadoop, Spark, Kafka, etc.) is a plus.
- Experience with AWS is a plus.

Education And Experience

- Bachelor's degree in Computer Science, Information Systems, Information Technology, or a similar major, or a Certified Development Program.
- 5+ years of experience building data pipelines using Python and GCP (Google Cloud Platform).

About Company Statement

S&P Global delivers essential intelligence that powers decision making. We provide the world's leading organizations with the right data, connected technologies and expertise they need to move ahead. As part of our team, you'll help solve complex challenges that equip businesses, governments and individuals with the knowledge to adapt to a changing economic landscape. S&P Global Mobility turns invaluable insights captured from automotive data into solutions that help our clients understand today's market, reach more customers, and shape the future of automotive mobility.

About S&P Global Mobility

At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people; that's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:

- Health & Wellness: health care coverage designed for the mind and body.
- Flexible Downtime: generous time off helps keep you energized for your time on.
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Job ID: 314204
Posted On: 2025-05-07
Location: Gurgaon, Haryana, India
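As a hedged sketch of the batch ingestion work this role describes, here is one way a GCS-to-BigQuery load step could be scripted. The bucket path, dataset, and table names are illustrative assumptions, not details from the posting:

```python
# Hypothetical sketch: batch-load newline-delimited JSON files from Cloud
# Storage into BigQuery, a common ETL ingestion step on GCP.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project id

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    autodetect=True,  # let BigQuery infer the schema for this sketch
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/2025-05-07/*.json",  # placeholder path
    "my-project.staging.vehicle_events",         # placeholder table
    job_config=job_config,
)
load_job.result()  # block until the load completes, raising on failure
print(f"Loaded {load_job.output_rows} rows.")
```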

Posted 1 month ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Build the solution for optimal extraction, transformation, and loading of data from a wide variety of data sources using Azure data ingestion and transformation components. The following technology skills are required:

- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Experience with ADF (Azure Data Factory) and Dataflow.
- Experience with big data tools like Delta Lake and Azure Databricks.
- Experience with Synapse.
- Azure data solution design skills.
- Ability to assemble large, complex data sets that meet functional and non-functional business requirements.
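To make the Databricks/Delta Lake requirement concrete, here is a hedged PySpark sketch of a raw-to-Delta ingestion step. The storage account and paths are assumed placeholders, and the Delta format is taken to be available as it is on Databricks runtimes:

```python
# Hypothetical sketch: a Databricks-style PySpark job that ingests raw CSV
# files and writes them to a Delta Lake table. Paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-to-delta").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://landing@mystorage.dfs.core.windows.net/orders/")  # assumed path
)

cleaned = (
    raw
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["order_id"])
)

# Append to a Delta table; Databricks runtimes bundle the Delta format.
cleaned.write.format("delta").mode("append").save(
    "abfss://curated@mystorage.dfs.core.windows.net/delta/orders"
)
```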

Posted 1 month ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

Key Responsibilities

- Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.
- Support data ingestion, transformation, and storage processes for structured and unstructured datasets.
- Participate in performance tuning and optimization of existing data workflows.
- Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery.
- Document code, processes, and architecture for reproducibility and future reference.
- Debug issues in data pipelines and contribute to their resolution.

Posted 1 month ago

Apply

10.0 - 15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Dear Candidate,

Greetings from TCS! TCS is hiring for Application Solution Architect (GCP); please find the JD below.

About the Company: TCS is a leading global IT services, consulting, and business solutions organization that delivers real results to global businesses, ensuring a level of certainty no other firm can match.

About the Role: The Application Solution Architect (GCP) will be responsible for leading cloud transformation programs and ensuring successful application and data migration to Google Cloud Platform.

Responsibilities

- Google Cloud Architect experience in large-scale cloud transformation programs.
- Hands-on experience in application and data migration to the cloud using Google migration tools.
- Designing and developing Google Cloud based systems using cloud-native tools.
- Expertise in GCP (GKE, Cloud Dataflow, Cloud Run, Cloud Functions, Cloud Build, BigQuery, IAM, VPC, Cloud DNS).
- Expertise in storage and databases: utilizing GCP storage services such as Google Cloud Storage for object storage, Cloud SQL for relational databases, and BigQuery for data analytics.
- GCP Certified Solution Architect - Professional.

Qualifications

- Experience range: 10 to 15 years.
- Location: PAN India.

Required Skills: Application Solution Architect, Microservices, .NET Architect, Database Design, Database Modelling, AWS

Preferred Skills: GCP certification

Pay range and compensation package: details regarding pay range or salary will be discussed during the interview process.

Equal Opportunity Statement: TCS is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Senior Data Engineer
Location: Chennai (Work From Office, Monday to Friday)
Relevant Experience: 5 years in data engineering
Shift Timing: 1:00 PM – 10:00 PM
CTC: Up to ₹15 LPA
Notice Period: Immediate to 15 days

Key Responsibilities:

- Design and develop secure, scalable, and high-performance data pipelines and data models.
- Lead the end-to-end ETL/ELT lifecycle, including implementation of Slowly Changing Dimension (SCD) Type 2 (see the sketch after this listing).
- Collaborate with data scientists, analysts, and engineering teams to define and deliver on data requirements.
- Maintain and optimize cloud-based data infrastructure (AWS, GCP, or Azure).
- Design and implement logical and physical data models using tools like Erwin or MySQL Workbench.
- Promote and implement data governance practices, including data quality checks, lineage tracking, and catalog management.
- Ensure compliance with organizational policies and data privacy regulations.
- Mentor junior engineers, perform code reviews, and promote engineering best practices.

Required Qualifications & Skills:

- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- Minimum of 5 years of experience in a Data Engineering or similar role.
- Strong proficiency in SQL and Python (or equivalent languages like Scala/Java).
- Experience with data orchestration tools such as Apache Airflow, dbt, etc.
- Hands-on expertise in cloud platforms: AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse).
- Familiarity with big data technologies such as Apache Spark, Kafka, Hive.
- Solid understanding of data warehousing, data modeling, and performance tuning.
- Strong analytical, problem-solving, and team collaboration skills.
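As a rough illustration of the SCD Type 2 work mentioned above: one common pattern is a MERGE that expires the current dimension row, followed by an insert of the new version. The sketch below assumes hypothetical `dim_customer` and `stg_customer` tables in BigQuery; it is one of several valid SCD2 formulations, not the posting's prescribed method:

```python
# Hypothetical sketch of SCD Type 2 in BigQuery: close out changed rows,
# then insert the new current versions. Table and column names are assumed.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project id

scd2_sql = """
-- Step 1: expire current dimension rows whose tracked attributes changed.
MERGE analytics.dim_customer AS d
USING staging.stg_customer AS s
ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHEN MATCHED AND (d.email != s.email OR d.city != s.city) THEN
  UPDATE SET d.is_current = FALSE, d.valid_to = CURRENT_TIMESTAMP();

-- Step 2: insert a fresh "current" row for new and changed customers.
INSERT INTO analytics.dim_customer
  (customer_id, email, city, valid_from, valid_to, is_current)
SELECT s.customer_id, s.email, s.city, CURRENT_TIMESTAMP(), NULL, TRUE
FROM staging.stg_customer AS s
LEFT JOIN analytics.dim_customer AS d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL;
"""

client.query(scd2_sql).result()  # multi-statement script; raises on error
```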

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview

This role is responsible for Snacks actuals reporting and forecasting for the UK Foods business.

Responsibilities

Reporting:
- Report actual performance results.
- Reconcile actuals between SAP and TM1.
- Prepare and present Vol, GR and D&A variance analysis to senior stakeholders.
- Present the performance narrative with meaningful commentary.

Forecasting of Snacks performance:
- Coordinate with business partners to finalize the forecast stream and overlays.
- Prepare and present the NR Cause of Change.
- Present the performance narrative with meaningful commentary.
- Operate tasks and reserves in TM1.

Planning process:
- Coordinate and work with supply chain and pricing teams to firm up AOP iterations.
- Enable TM1 submissions to the sector.
- Identify and support continuous improvements, including simplifications and process or control remediations.

Management of dashboarding tools:
- Power BI toolkit maintenance.
- User access management.
- Creation of views with business insights.
- Linkage, streaming, and management of dataflow into the Tableau dashboard (Cockpit).

Project / analytics support:
- Provide topline commercial analytics support.
- Contribute to ad-hoc analysis to draw impactful business insights.

Qualifications

- CA with 6 years of experience, or MBA (Finance) with 7-8 years of experience.
- Digitally adept: SAP / TM1.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Bangalore / Chennai

- Hands-on data modelling for OLTP and OLAP systems.
- In-depth knowledge of conceptual, logical and physical data modelling.
- Strong understanding of indexing, partitioning and data sharding, with practical experience of having applied them.
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction.
- Working experience with at least one data modelling tool, preferably DBSchema or Erwin.
- Good understanding of GCP databases like AlloyDB, Cloud SQL, and BigQuery.
- Functional knowledge of the mutual fund industry is a plus.

Role & Responsibilities

- Work with business users and other stakeholders to understand business processes.
- Design and implement dimension and fact tables.
- Identify and implement data transformation/cleansing requirements.
- Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems into the Enterprise Data Warehouse.
- Develop conceptual, logical, and physical data models with associated metadata, including data lineage and technical data definitions.
- Design, develop and maintain ETL workflows and mappings using the appropriate data load technique.
- Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
- Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
- Analyze and resolve problems and provide technical assistance as necessary.
- Partner with the BI team to evaluate, design and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
- Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
- Leverage transactional information and data from ERP, CRM and HRIS applications to model, extract and transform into reporting and analytics.
- Define and document the use of BI through user experience/use cases and prototypes; test and deploy BI solutions.
- Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers and quality issues; continuously validate reports and dashboards and suggest improvements.
- Train business end-users, IT analysts, and developers.

Required Skills

- Bachelor's degree in Computer Science or a similar field, or equivalent work experience.
- 5+ years of experience on data warehousing, data engineering or data integration projects.
- Expert with data warehousing concepts, strategies, and tools.
- Strong SQL background.
- Strong knowledge of relational databases like SQL Server, PostgreSQL, MySQL.
- Strong experience with GCP and Google BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Functions and GCS.
- Good to have: knowledge of SQL Server Reporting Services (SSRS) and SQL Server Integration Services (SSIS).
- Knowledge of AWS and Azure cloud is a plus.
- Experience with Informatica PowerExchange for Mainframe, Salesforce, and other new-age data sources.
- Experience in integration using APIs, XML, JSON, etc.

Skills: data modeling, OLAP, OLTP, BigQuery and Google Cloud Platform (GCP)
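To make the partitioning and clustering point concrete, here is a minimal sketch of BigQuery DDL for a date-partitioned, clustered fact table. All table and column names (including the mutual-fund flavored `fund_id`) are hypothetical:

```python
# Hypothetical sketch: create a date-partitioned, clustered BigQuery fact
# table, the kind of physical design decision this role calls for.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project id

ddl = """
CREATE TABLE IF NOT EXISTS dw.fact_transactions (
  txn_id    STRING NOT NULL,
  fund_id   STRING,            -- illustrative dimension key
  txn_date  DATE NOT NULL,
  amount    NUMERIC
)
PARTITION BY txn_date           -- prunes scans for near-real-time reporting
CLUSTER BY fund_id              -- co-locates rows frequently filtered together
"""

client.query(ddl).result()
```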

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About McDonald's:

One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary:

We're seeking a hands-on Platform Engineer to support our enterprise data integration and enablement platform. As a Platform Engineer II, you'll be responsible for designing, maintaining, and optimizing secure and scalable data movement services, such as batch processing, file transfers, and data orchestration. This role is essential to ensuring reliable data flow across systems to power analytics, reporting, and platform services in a cloud-native environment.

Who we're looking for:

Primary Responsibilities:

Hands-On Data Integration Engineering
- Build and maintain data transfer pipelines, file ingestion processes, and batch workflows for internal and external data sources.
- Configure and manage platform components that enable secure, auditable, and resilient data movement.
- Automate routine data processing tasks to improve reliability and reduce manual intervention.

Platform Operations & Monitoring
- Monitor platform services for performance, availability, and failures; respond quickly to disruptions.
- Tune system parameters and job schedules to improve throughput and processing efficiency.
- Implement logging, metrics, and alerting to ensure end-to-end observability of data workflows.

Security, Compliance & Support
- Apply secure protocols and encryption standards to data transfer processes (e.g., SFTP, HTTPS, GCS/AWS).
- Support compliance with internal controls and external regulations (e.g., GDPR, SOC2, PCI).
- Collaborate with security and infrastructure teams to manage access controls, service patches, and incident response.

Troubleshooting & Documentation
- Investigate and resolve issues related to data processing failures, delays, or quality anomalies.
- Document system workflows, configurations, and troubleshooting runbooks for team use.
- Provide support for platform users and participate in on-call rotations as needed.

Skills:

- 3+ years of hands-on experience in data integration, platform engineering, or infrastructure operations.
- Proficiency in:
  - designing and supporting batch and file-based data transfers;
  - Python scripting and SQL for diagnostics, data movement, and automation;
  - Terraform scripting and deployment of cloud infrastructure services;
  - working with GCP (preferred) or AWS data analytics services, such as GCP Cloud Storage, BigQuery, Cloud Composer, Pub/Sub, and Dataflow, or AWS S3, Glue, Redshift, Athena, Lambda, EventBridge, and Step Functions;
  - cloud-native storage and compute optimization for data movement and processing;
  - infrastructure-as-code and CI/CD practices (e.g., Terraform, Ansible, Cloud Build, GitHub Actions).
- Strong analytical and debugging skills for troubleshooting issues in distributed, high-volume environments.
- Bachelor's degree in Computer Science, Information Systems, or a related technical field.

Work location: Hyderabad, India
Work pattern: Full-time role
Work mode: Hybrid

Additional Information:

McDonald's is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald's provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

McDonald's Capability Center India Private Limited ("McDonald's in India") is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald's in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald's in India does not discriminate based on race, religion, color, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
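A hedged sketch of the file-movement work this role centers on: pushing a locally staged batch file to Cloud Storage and verifying the transfer end to end with a CRC32C checksum. The bucket name, project id, and file paths are assumed placeholders:

```python
# Hypothetical sketch: move a staged batch file into GCS and verify the
# upload by comparing CRC32C checksums. Names and paths are placeholders.
import base64

import google_crc32c
from google.cloud import storage

def transfer_file(local_path: str, bucket_name: str, dest_name: str) -> None:
    client = storage.Client(project="my-project")  # assumed project id
    blob = client.bucket(bucket_name).blob(dest_name)

    # Compute the local checksum so the transfer is verifiable end to end.
    checksum = google_crc32c.Checksum()
    with open(local_path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            checksum.update(chunk)
    local_crc = base64.b64encode(checksum.digest()).decode("utf-8")

    blob.upload_from_filename(local_path)
    blob.reload()  # refresh metadata, including the server-side checksum

    if blob.crc32c != local_crc:
        raise RuntimeError(f"Checksum mismatch for {dest_name}; retry transfer")

transfer_file("/data/staging/orders_20250507.csv", "my-ingest-bucket",
              "incoming/orders_20250507.csv")
```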

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You'll Do

- Design, develop, and operate high-scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Manage sole project priorities, deadlines, and deliverables.
- Research, create, and develop software applications to extend and improve on Equifax solutions.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in Sprint planning, Sprint retrospectives, and other team activities.

What Experience You Need

- Bachelor's degree or equivalent experience.
- 5+ years of software engineering experience.
- 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, CSS.
- 5+ years of experience with cloud technology: GCP, AWS, or Azure.
- 5+ years of experience designing and developing cloud-native solutions.
- 5+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes.
- 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs.

What Could Set You Apart

- A self-starter who identifies and responds to priority shifts with minimal supervision.
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others.
- UI development (e.g. HTML, JavaScript, Angular and Bootstrap).
- Experience with backend technologies such as Java/J2EE, Spring Boot, SOA and microservices.
- Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven and Gradle.
- Agile environments (e.g. Scrum, XP).
- Relational databases (e.g. SQL Server, MySQL).
- Atlassian tooling (e.g. JIRA, Confluence, and GitHub).
- Developing with a modern JDK (v1.7+).
- Automated testing: JUnit, Selenium, LoadRunner, SoapUI.

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities

- Design and implement scalable, efficient, and secure data pipelines on GCP, utilizing tools such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
- Collaborate with cross-functional teams (data scientists, analysts, and software engineers) to understand business requirements and deliver actionable data solutions.
- Develop and maintain ETL/ELT processes to ingest, transform, and load data from various sources into GCP-based data warehouses.
- Build and manage data lakes and data marts on GCP to support analytics and business intelligence initiatives.
- Implement automated data quality checks, monitoring, and alerting systems to ensure data integrity (a sketch of such a check follows this listing).
- Optimize and tune performance for large-scale data processing jobs in BigQuery, Dataflow, and other GCP tools.
- Create and maintain data pipelines to collect, clean, and transform data for analytics and machine learning purposes.
- Ensure data governance and compliance with organizational policies, including data security, privacy, and access controls.
- Stay up to date with new GCP services and features and make recommendations for improvements and new implementations.

Mandatory Skill Sets: GCP, BigQuery, Dataproc
Preferred Skill Sets: GCP, BigQuery, Dataproc, Airflow
Years of Experience Required: 3-7
Education Qualification: B.Tech / M.Tech / MBA / MCA

Degrees/Field of Study required: Master of Engineering, Master of Business Administration, Bachelor of Engineering.

Required Skills: Google Cloud Platform (GCP)

Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}

Desired Languages: not specified
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
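As a loose illustration of the automated data-quality checks mentioned above, a minimal BigQuery-based check might look like this; the table, column, and threshold choices are assumptions for the sketch:

```python
# Hypothetical sketch: a simple automated data-quality check in BigQuery
# that fails loudly when a daily partition is empty or has null keys.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project id

CHECK_SQL = """
SELECT
  COUNT(*) AS row_count,
  COUNTIF(order_id IS NULL) AS null_keys
FROM analytics.orders_clean
WHERE order_date = CURRENT_DATE()
"""

row = list(client.query(CHECK_SQL).result())[0]
problems = []
if row.row_count == 0:
    problems.append("no rows loaded for today's partition")
if row.null_keys > 0:
    problems.append(f"{row.null_keys} rows with NULL order_id")

if problems:
    # In a real pipeline this would page an on-call channel or fail the DAG.
    raise RuntimeError("Data quality check failed: " + "; ".join(problems))
print("Data quality check passed.")
```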

Posted 1 month ago

Apply

9.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Job Title: Google Cloud (GCP) Data Engineer
Location: Hybrid (Bengaluru)
Job Type: Full-Time
Experience Level: Minimum 9 years
Joining: Immediate / 1 week
Client: HSBC
Mandatory Skills: GCS + Google BigQuery + Airflow/Composer + Python

Company Description:

Tech T7 Innovations is a company that provides IT solutions to clients worldwide. The team consists of highly skilled and experienced professionals who are passionate about IT. Tech T7 Innovations offers a wide range of IT services, including software development, web design, cloud computing, cybersecurity, data engineering, data science and machine learning. The company is committed to staying up to date with the latest technologies and best practices to deliver the best solutions to its clients.

Job Summary:

We are looking for a highly experienced GCP Data Engineer with 11+ years in data engineering and a proven track record of designing, building, and optimizing scalable data pipelines and architectures on Google Cloud Platform (GCP). The ideal candidate is hands-on with Google Cloud Storage (GCS), BigQuery (BQ), Apache Airflow, and Python, and is adept at managing complex data workflows and transformations at scale.

Key Responsibilities:

- Design and implement highly scalable, reliable, and secure data pipelines on GCP using GCS, BigQuery, and Airflow.
- Develop robust ETL/ELT processes using Python and integrate them with data orchestration tools such as Airflow (a minimal DAG sketch follows this listing).
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and deliver high-quality solutions.
- Optimize BigQuery performance through efficient schema design, partitioning, clustering, and query optimization.
- Manage and maintain data lake and data warehouse environments, ensuring data integrity and availability.
- Automate and monitor data pipelines to ensure consistent and reliable data delivery.
- Contribute to architecture decisions and ensure adherence to data governance and security standards.
- Mentor junior engineers and promote best practices in data engineering and cloud usage.

Must-Have Skills:

- Google Cloud Platform (GCP): in-depth experience with core services like GCS, BigQuery, IAM, and Cloud Functions.
- BigQuery (BQ): expertise in data modeling, performance tuning, and large-scale analytics.
- Google Cloud Storage (GCS): strong understanding of data storage, access patterns, and integration.
- Apache Airflow: experience in authoring and managing DAGs for complex workflows.
- Python: proficient in scripting and automation, including working with APIs, data processing libraries (e.g., pandas, PySpark), and custom operators in Airflow.

Preferred Qualifications:

- Experience with CI/CD pipelines for data workflows.
- Exposure to Dataflow, Pub/Sub, or other GCP data services.
- Familiarity with Terraform or Infrastructure as Code (IaC) on GCP.
- Strong problem-solving and communication skills.
- GCP certifications (e.g., Professional Data Engineer) are a plus.

Job Types: Full-time, Permanent
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year

Benefits:
- Health insurance
- Provident Fund
- Work from home

Schedule: Day shift
Supplemental Pay: Performance bonus

Experience:
- GCP: 9 years (required)
- GCS: 9 years (required)
- Apache Airflow: 9 years (required)
- Python: 9 years (preferred)

Location: Bengaluru, Karnataka (required)
Work Location: In person
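To illustrate the Airflow DAG authoring this listing emphasizes, here is a minimal sketch of a daily DAG that runs a BigQuery transform. The DAG id, schedule, project, and SQL are placeholders:

```python
# Hypothetical sketch: a minimal Airflow DAG that runs a daily BigQuery
# transform from Python. All names and the SQL body are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_transform() -> None:
    # Import inside the task so the DAG file parses without GCP libraries.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # assumed project id
    client.query(
        "CREATE OR REPLACE TABLE analytics.daily_summary AS "
        "SELECT order_date, SUM(amount) AS revenue "
        "FROM analytics.orders_clean GROUP BY order_date"
    ).result()

with DAG(
    dag_id="daily_bigquery_transform",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="run_transform", python_callable=run_transform)
```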

Posted 1 month ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About The Job

Developers at Vendasta work in teams, collaborating with Product Managers and Designers on the creation of new features and products. Our Research and Development department works hard to help developers learn, grow, and experiment while at work. With a group of over 100 developers, we have fostered an environment that provides developers with the opportunity to continuously learn from each other. The ideal candidate will demonstrate that they are bright and can tackle tough problems while being able to communicate their solution to others. They are creative and can mix technology with the customer's problems to find the right solution. Lastly, they are driven and will motivate themselves and others to get things done. As an experienced Software Developer, we expect that you will grow into a thought leader at Vendasta, driving better results across our development organization.

Your Impact

- Develop software in teams of 3-5 developers, with the ability to take on tasks for the team and independently work on them to completion.
- Follow best practices to write clean, maintainable, scalable, and tested software.
- Contribute to the best engineering practices, including the use of design patterns, CI/CD, maintainable and scalable code, code review, and automated tests.
- Provide inputs for a technical roadmap for the Product Area.
- Ensure that the NFRs and technical debt get their due focus.
- Work collaboratively with Product Managers to design solutions (including a technical roadmap) that help our Partners connect digital solutions to small and medium-sized businesses.
- Analyze and improve current system integrations and migration strategies.
- Interact and collaborate with our high-quality technical team across India and Canada.

What You Bring to the Table

- 8+ years of experience in a related field, with at least 3+ years as a full stack developer in an architect or senior development role.
- Experience with, or a strong understanding of, highly scalable, data-intensive, distributed Internet applications.
- Software development experience, including building distributed, microservice-style and cloud-based application architectures.
- Proficiency in a modern software language, and willingness to quickly learn our technology stack.
- Preference will be given to candidates with strong Go (programming language) experience who can demonstrate the ability to build and adapt web applications using Angular.
- Experience in designing, building and implementing cloud-native architectures (GCP preferred).
- Experience working with the Scrum framework.

Technologies We Use

- Cloud-native computing on Google Cloud Platform: BigQuery, Cloud Dataflow, Cloud Pub/Sub, Google Data Studio, Cloud IAM, Cloud Storage, Cloud SQL, Cloud Spanner, Cloud Datastore, Google Maps Platform, Stackdriver, etc. We have been invited to join the Early Access Program on quite a few GCP technologies.
- GoLang, TypeScript, Python, JavaScript, HTML, Angular, gRPC, Kubernetes.
- Elasticsearch, MySQL, PostgreSQL.

About Vendasta

So what do we do? We create an entire platform full of digital products and solutions that help small to medium-sized businesses (SMBs) have a stronger presence online through digital advertising, online listings, reputation management, website creation, social media marketing, and much more! Our platform is used exclusively by channel partners, who sell products and services to SMBs, allowing them to leverage us to scale and grow their business. We are trusted by 65,000+ channel partners, serving over 6 million SMBs worldwide!

Perks

- Stock options (as per policy)
- Benefits: health insurance, paid time off, public transport reimbursement, flex days
- Training and career development: professional development plans, leadership workshops, mentorship programs, and more
- Free snacks, hot beverages, and catered lunches on Fridays
- Culture comprised of our core values: Drive, Innovation, Respect, and Agility
- Provident Fund

Posted 1 month ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibility

- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications

- Engineering degree or equivalent experience.
- 3+ years of ETL experience using SQL Server Integration Services (SSIS) within Visual Studio.
- 3+ years of advanced SQL Server experience; certification in Microsoft SQL Server is a plus.
- Experience using the GitHub platform for version control.
- Experience using the GitHub Copilot AI-powered code assistant.
- Experience developing within an agile (i.e. Scrum or Kanban) framework.
- Healthcare experience.

API Experience

- Proficiency in API design and architecture.
- Understanding of RESTful principles and GraphQL.
- Expertise in programming languages (e.g. JavaScript, Python, Java).
- Knowledge of API security best practices.
- Proficiency with API documentation tools (e.g. Swagger/OpenAPI).

Google Cloud Platform Experience

- Utilizing Google Cloud Dataflow and Google Cloud Dataproc with SQL or Python in Jupyter Notebooks to load data into BigQuery and Google Cloud Storage (a notebook-style sketch follows this listing).
- Implementing data processing jobs and managing data within BigQuery.
- Creating dashboards and visualizations for business users using Google Data Studio and Looker.
- Utilizing Google AI Platform to build and deploy machine learning models.
- Expertise in cloud data migration, security and administration.
- Migrating SQL Server databases from on-premises to Google Cloud SQL, Google Cloud Spanner, and/or SQL Server on Google Compute Engine.
- Managing database migration projects from on-premises to Google Cloud.
- BI report development.
- Solid soft skills (e.g. communication, interpersonal, collaborative, resilient).

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
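A hedged sketch of the notebook-style BigQuery work described above: pull query results into pandas, apply a toy transformation, and load the frame back. Project, dataset, table, and column names are assumptions, not details from the posting (loading a DataFrame requires pyarrow installed):

```python
# Hypothetical sketch: a Jupyter-notebook-style cell that round-trips data
# through BigQuery with pandas. All names are illustrative placeholders.
import pandas as pd
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project id

# Pull a slice of data into a DataFrame for inspection.
df = client.query(
    "SELECT claim_id, member_id, amount FROM staging.claims LIMIT 1000"
).to_dataframe()

# Toy transformation: flag high-value claims for review.
df["high_value"] = df["amount"] > 10_000

# Write the enriched frame back to a BigQuery table.
client.load_table_from_dataframe(
    df, "my-project.analytics.claims_flagged"
).result()
```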

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview

Leading AI-driven global supply chain solutions software product company and one of Glassdoor's "Best Places to Work". Seeking an astute individual who has a strong technical foundation, the ability to be hands-on with the broader engineering team as part of the development/deployment cycle, deep knowledge of industry best practices, and Data Science and Machine Learning experience, with the ability to apply them while working with both the platform and the product teams.

Scope

Our machine learning platform ingests data in real time, processes information from millions of retail items to serve deep learning models, and produces billions of predictions on a daily basis. The Blue Yonder Data Science and Machine Learning team works closely with sales, product and engineering teams to design and implement the next generation of retail solutions. Data Science team members are tasked with turning small, sparse, and massive data into actionable insights with measurable improvements to the customer bottom line.

Our Current Technical Environment

- Software: Python 3.*
- Frameworks/others: TensorFlow, PyTorch, BigQuery/Snowflake, Apache Beam, Kubeflow, Apache Flink/Dataflow, Kubernetes, Kafka, Pub/Sub, TFX, Apache Spark, and Flask.
- Application architecture: scalable, resilient, reactive, event-driven, secure multi-tenant microservices architecture.
- Cloud: Azure

What We Are Looking For

- Bachelor's degree in Computer Science or related fields; graduate degree preferred.
- Solid understanding of data science and deep learning foundations.
- Proficient in Python programming with a solid understanding of data structures.
- Experience working with most of the following frameworks and libraries: Pandas, NumPy, Keras, TensorFlow, Jupyter, Matplotlib, etc.
- Expertise in any database query language; SQL preferred.
- Familiarity with big data technology such as Snowflake, Apache Beam/Spark/Flink, and Databricks.
- Solid experience with any of the major cloud platforms, preferably Azure and/or GCP (Google Cloud Platform).
- Reasonable knowledge of modern software development tools and respective best practices, such as Git, Jenkins, Docker, Jira, etc.
- Familiarity with deep learning, NLP, reinforcement learning, combinatorial optimization, etc.
- Provable experience guiding junior data scientists in official or unofficial settings.
- Desired: knowledge of Kafka, Redis, Cassandra, etc.

What You Will Do

As a Senior Data Scientist, you serve as a specialist in the team and support the team with the following responsibilities (see the model sketch after this listing):

- Independently, or alongside junior scientists, implement machine learning models by:
  - procuring data from platform, client, and public data sources;
  - implementing data enrichment and cleansing routines;
  - implementing features, preparing modelling data sets, feature selection, etc.;
  - evaluating candidate models, selecting the final one, and reporting on its test performance;
  - ensuring proper runtime deployment of models; and
  - implementing runtime monitoring of model inputs and performance to ensure continued model stability.
- Work with product, sales and engineering teams to help shape the final solution.
- Use data to understand patterns; come up with and test hypotheses; iterate.
- Help prepare sales materials, estimate hardware requirements, etc.
- Attend client meetings, online and onsite, to discuss new and current functionality.

Our Values

If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success, and the success of our customers. Does your heart beat like ours? Find out here: Core Values.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
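For a flavor of the modelling work described above, here is a minimal Keras sketch of the kind of regression model a retail demand pipeline might train. The feature shape, data, and architecture are entirely synthetic placeholders:

```python
# Hypothetical sketch: a tiny Keras regression model trained on synthetic
# "demand" data. Shapes, seeds, and hyperparameters are illustrative.
import numpy as np
import tensorflow as tf

# Synthetic training data: 1,000 items with 8 numeric features each,
# predicting a single demand value.
rng = np.random.default_rng(seed=42)
X = rng.normal(size=(1000, 8)).astype("float32")
y = (X @ rng.normal(size=(8, 1)) +
     rng.normal(scale=0.1, size=(1000, 1))).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # regression head for demand
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("Training-set MSE:", model.evaluate(X, y, verbose=0))
```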

Posted 1 month ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Total Experience: 5 to 8 years
Location: Pune & Hyderabad
Notice Period: Immediate to 30 days

Must have:

- 5+ years of experience in data engineering technology and tools.
- Experience with Java/Scala based implementations for enterprise-wide platforms preferred.
- Experience with Apache Beam, Google Dataflow, and Apache Kafka for the real-time stream processing technology stack.
- Complex stateful processing of events with partitioning for higher throughputs (a stateful-processing sketch follows this listing).
- Experience fine-tuning throughput and improving the performance of data pipelines.
- Experience with analytical data store optimizations, and with querying and managing such stores.
- Experience with alternative data engineering tools (Apache Flink, Apache Spark, etc.).
- Automated CI/CD or operations concerns on the engineering platforms.
- Interpreting problems from a functional context and transforming them into technology solutions.
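To illustrate the stateful, partitioned stream processing this listing asks for, here is a hedged Apache Beam sketch of a per-key stateful counter. The keying scheme and reporting threshold are assumptions for illustration, and state is scoped per key (and per window), which is what gives the partitioned behavior:

```python
# Hypothetical sketch: per-key stateful processing in Apache Beam. Input is
# keyed, and each key maintains a running count in user state.
import apache_beam as beam
from apache_beam.coders import VarIntCoder
from apache_beam.transforms.userstate import CombiningValueStateSpec

class CountPerKey(beam.DoFn):
    # One counter per key; Beam partitions state by the element's key.
    COUNT = CombiningValueStateSpec("count", VarIntCoder(), combine_fn=sum)

    def process(self, element, count=beam.DoFn.StateParam(COUNT)):
        key, _value = element
        count.add(1)
        total = count.read()
        if total % 1000 == 0:  # assumed reporting threshold
            yield (key, total)

with beam.Pipeline() as p:  # DirectRunner for a local demonstration
    (
        p
        | beam.Create([("device-1", i) for i in range(3000)])
        | "KeyedCount" >> beam.ParDo(CountPerKey())
        | beam.Map(print)
    )
```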

Posted 1 month ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site

Total Experience: 5 to 8 years
Location: Pune & Hyderabad
Notice Period: Immediate to 30 days

Must have:

- 5+ years of experience in data engineering technology and tools.
- Experience with Java/Scala based implementations for enterprise-wide platforms preferred.
- Experience with Apache Beam, Google Dataflow, and Apache Kafka for the real-time stream processing technology stack.
- Complex stateful processing of events with partitioning for higher throughputs.
- Experience fine-tuning throughput and improving the performance of data pipelines.
- Experience with analytical data store optimizations, and with querying and managing such stores.
- Experience with alternative data engineering tools (Apache Flink, Apache Spark, etc.).
- Automated CI/CD or operations concerns on the engineering platforms.
- Interpreting problems from a functional context and transforming them into technology solutions.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for an experienced Integration Technical Lead with over 10 years of in-depth experience in Oracle Fusion Middleware technologies such as SOA Suite, Oracle Service Bus (OSB), and Oracle Data Integrator (ODI). The candidate will be responsible for leading integration initiatives, including custom development, platform customization, and day-to-day operational support. A strong interest in Google Cloud Platform (GCP) is highly desirable, with clear opportunities for training and skill development.

ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries by focusing on creating value through innovation.

Job Responsibilities:

Integration Leadership & Development
- Lead end-to-end integration design and development across on-premise and cloud systems using Oracle SOA, OSB, and ODI.
- Drive new integration projects, from requirements gathering through to deployment and support.
- Develop, customize, and maintain reusable integration components and templates.
- Translate complex business processes into scalable, secure, and performant integration solutions.

Platform Customization & Optimization
- Customize Oracle Fusion Middleware components to meet specific business needs and performance objectives.
- Evaluate existing integrations and enhance them for greater efficiency and lower latency.
- Implement best practices in integration design, error handling, and performance tuning.

Operational Excellence & Support
- Own the operational stability of integration platforms, including monitoring, incident resolution, and root cause analysis.
- Manage daily operations such as deployments, patches, backups, and performance reviews.
- Collaborate with IT support teams to maintain integration SLAs, uptime, and reliability.

Cloud Integration & GCP Adoption
- Contribute to the design of hybrid and cloud-native integration architectures using GCP.
- Learn and eventually implement integration patterns using tools like Apigee, Pub/Sub, Cloud Functions, and Dataflow (a Pub/Sub sketch follows this listing).
- Participate in the GCP migration initiative for legacy integration assets.

Basic Qualifications:

- 10+ years of hands-on experience with Oracle SOA Suite, OSB, and ODI in enterprise environments.
- Expertise in web services (REST/SOAP), XML, XSD, XSLT, XPath, and service orchestration.
- Strong skills in platform customization, new integration development, integration monitoring, alerting, troubleshooting processes, and long-term system maintenance.
- Experience with performance optimization, fault tolerance, and secure integrations.
- Excellent communication and team leadership skills.

Preferred Qualifications:

- Exposure to Google Cloud Platform (GCP), or a strong interest and ability to learn.
- Familiarity with GCP services for integration (Pub/Sub, Cloud Storage/Functions).
- Understanding of containerized deployments using Docker and Kubernetes.
- Experience with DevOps tools and CI/CD pipelines for integration delivery.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
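As a small illustration of the Pub/Sub integration pattern mentioned above, publishing an event to a topic decouples the producer (say, a legacy middleware bridge) from downstream subscribers. The project id, topic name, and payload are assumed placeholders:

```python
# Hypothetical sketch: publish an integration event to Google Cloud Pub/Sub.
# Project, topic, and payload are illustrative placeholders.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "order-events")  # assumed

event = {"order_id": "ORD-1001", "status": "CREATED"}  # illustrative payload
future = publisher.publish(
    topic_path,
    data=json.dumps(event).encode("utf-8"),
    source="oracle-soa-bridge",  # attributes let subscribers filter/route
)
print("Published message id:", future.result())  # blocks until acknowledged
```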

Posted 1 month ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Total Experience: 5 to 8 years
Location: Pune & Hyderabad
Notice Period: Immediate to 30 days

Must have:

- 5+ years of experience in data engineering technology and tools.
- Experience with Java/Scala based implementations for enterprise-wide platforms preferred.
- Experience with Apache Beam, Google Dataflow, and Apache Kafka for the real-time stream processing technology stack.
- Complex stateful processing of events with partitioning for higher throughputs.
- Experience fine-tuning throughput and improving the performance of data pipelines.
- Experience with analytical data store optimizations, and with querying and managing such stores.
- Experience with alternative data engineering tools (Apache Flink, Apache Spark, etc.).
- Automated CI/CD or operations concerns on the engineering platforms.
- Interpreting problems from a functional context and transforming them into technology solutions.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Total Experience: 5 to 8 years
Location: Pune & Hyderabad
Notice Period: Immediate to 30 days

Must have:

- 5+ years of experience in data engineering technology and tools.
- Experience with Java/Scala based implementations for enterprise-wide platforms preferred.
- Experience with Apache Beam, Google Dataflow, and Apache Kafka for the real-time stream processing technology stack.
- Complex stateful processing of events with partitioning for higher throughputs.
- Experience fine-tuning throughput and improving the performance of data pipelines.
- Experience with analytical data store optimizations, and with querying and managing such stores.
- Experience with alternative data engineering tools (Apache Flink, Apache Spark, etc.).
- Automated CI/CD or operations concerns on the engineering platforms.
- Interpreting problems from a functional context and transforming them into technology solutions.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Total Experience: 5 to 8 years
Location: Pune & Hyderabad
Notice Period: Immediate to 30 days

Must have:

- 5+ years of experience in data engineering technology and tools.
- Experience with Java/Scala based implementations for enterprise-wide platforms preferred.
- Experience with Apache Beam, Google Dataflow, and Apache Kafka for the real-time stream processing technology stack.
- Complex stateful processing of events with partitioning for higher throughputs.
- Experience fine-tuning throughput and improving the performance of data pipelines.
- Experience with analytical data store optimizations, and with querying and managing such stores.
- Experience with alternative data engineering tools (Apache Flink, Apache Spark, etc.).
- Automated CI/CD or operations concerns on the engineering platforms.
- Interpreting problems from a functional context and transforming them into technology solutions.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Total Experience: 5 to 8 years
Location: Pune & Hyderabad
Notice Period: Immediate to 30 days

Must have:

- 5+ years of experience in data engineering technology and tools.
- Experience with Java/Scala based implementations for enterprise-wide platforms preferred.
- Experience with Apache Beam, Google Dataflow, and Apache Kafka for the real-time stream processing technology stack.
- Complex stateful processing of events with partitioning for higher throughputs.
- Experience fine-tuning throughput and improving the performance of data pipelines.
- Experience with analytical data store optimizations, and with querying and managing such stores.
- Experience with alternative data engineering tools (Apache Flink, Apache Spark, etc.).
- Automated CI/CD or operations concerns on the engineering platforms.
- Interpreting problems from a functional context and transforming them into technology solutions.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Total Experience: 5 to 8 years
Location: Pune & Hyderabad
Notice Period: Immediate to 30 days

Must have:

- 5+ years of experience in data engineering technology and tools.
- Experience with Java/Scala based implementations for enterprise-wide platforms preferred.
- Experience with Apache Beam, Google Dataflow, and Apache Kafka for the real-time stream processing technology stack.
- Complex stateful processing of events with partitioning for higher throughputs.
- Experience fine-tuning throughput and improving the performance of data pipelines.
- Experience with analytical data store optimizations, and with querying and managing such stores.
- Experience with alternative data engineering tools (Apache Flink, Apache Spark, etc.).
- Automated CI/CD or operations concerns on the engineering platforms.
- Interpreting problems from a functional context and transforming them into technology solutions.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
