
2873 Airflow Jobs - Page 46

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Exciting Opportunity: Data Engineer - Contract (1 Year)

Join a leading UK-based offshore product company as a Data Engineer in Hyderabad! If you thrive on cutting-edge data infrastructure and scalable pipelines using Google Cloud Platform (GCP), this 1-year contract role is perfect for you.

**About the Role:**
- Position: Data Engineer
- Type: Contract (1 Year)
- Experience: 4-13 Years
- Notice Period: Immediate - 30 Days
- Location: Hyderabad
- Interview: Face-to-Face on June 7, June 11, June 14

**Key Skills Required:**
- Python
- GCP
- SQL
- Google BigQuery

**Responsibilities:**
- Build and maintain scalable data pipelines using Dataflow and Cloud Composer.
- Ingest and transform data into BigQuery from various sources.
- Optimize complex SQL queries for reporting and analytics.
- Automate workflows and orchestrate pipelines with Cloud Composer (Airflow).
- Collaborate with cross-functional teams to meet data requirements.
- Ensure end-to-end data quality, consistency, and availability.
- Monitor, troubleshoot, and resolve data pipeline issues.
- Contribute to documentation, code reviews, and team knowledge sharing.

**Preferred Qualifications:**
- GCP Certification (Professional Data Engineer or similar) is a plus.
- Experience with DevOps practices for data pipelines.
- Exposure to real-time data streaming tools is an advantage.

Ready to take your career to new heights with a global product firm? Don't miss out on this opportunity! Apply now!

#DataEngineer #GCP #Hyderabad #JobOpening
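For context on the kind of work this listing describes (orchestrating BigQuery loads with Cloud Composer/Airflow), here is a minimal, illustrative DAG sketch. It is not the employer's actual pipeline: the DAG id, bucket, dataset, and table names are hypothetical placeholders, and it assumes the apache-airflow-providers-google package is installed.

```python
# Minimal sketch: daily Airflow DAG that loads CSV files from GCS into BigQuery,
# then runs an aggregation query. All names below are hypothetical examples.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_ingest",        # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Load the day's raw files from a landing bucket into a staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw",
        bucket="example-landing-bucket",            # hypothetical bucket
        source_objects=["sales/{{ ds }}/*.csv"],
        destination_project_dataset_table="analytics.raw_sales",
        source_format="CSV",
        write_disposition="WRITE_TRUNCATE",
    )

    # Aggregate the staged data into a reporting table.
    transform = BigQueryInsertJobOperator(
        task_id="transform",
        configuration={
            "query": {
                "query": "SELECT region, SUM(amount) AS revenue "
                         "FROM analytics.raw_sales GROUP BY region",
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "example-project",  # hypothetical project
                    "datasetId": "analytics",
                    "tableId": "daily_revenue",
                },
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )

    load_raw >> transform
```

In Cloud Composer, a file like this would simply be placed in the environment's dags/ folder, where the scheduler picks it up on its next parse.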

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Source: Naukri

Responsibilities
- Design, develop, and maintain data pipelines on GCP.
- Implement data storage solutions and optimize data processing workflows.
- Ensure data quality and integrity throughout the data lifecycle.
- Collaborate with data scientists and analysts to understand data requirements.
- Monitor and maintain the health of the data infrastructure.
- Troubleshoot and resolve data-related issues.
- Stay updated with the latest GCP features and best practices.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer with expertise in GCP.
- Strong understanding of data warehousing concepts and ETL processes.
- Experience with Python, SQL, Hive, Airflow, and other GCP data services.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.

Skills: Google Cloud Platform (GCP), BigQuery, Dataflow, SQL, Python, ETL processes, Data warehousing, Data modeling, Hive, Airflow

Interview Mode: Virtual | Date: 5th June 2025
Interested candidates, share your resume to soundaryaa.k@hcltech.com.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities: Scala, Java, Spark (Spark Streaming, MLlib), Kafka or equivalent cloud big data components, SQL, PostgreSQL, T-SQL/PL-SQL, Hadoop (Airflow, Oozie, HDFS, Sqoop, Hive, Pig, MapReduce), shell scripting, cloud technologies (GCP preferable)

Mandatory Skill Sets: Scala, Spark, GCP
Preferred Skill Sets: Scala, Spark, GCP
Years of Experience Required: 4-8
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study Required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 7 more}
Desired Languages: Not specified
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About the Company

About Titan: At Titan, we're redefining email for entrepreneurs, innovators, and creators - transforming it into a powerful tool for business growth. Built by a team that deeply cares about helping businesses succeed, Titan is more than just an email platform. Founded by Bhavin Turakhia - who also founded Directi, Radix, and Zeta, with a combined enterprise value exceeding $2 billion - Titan is backed by a strong legacy of innovation. Today, Titan powers millions of conversations, with 2.4 million emails sent and received every week. In 2021, Automattic (the parent company of WordPress.com) invested $30M in Titan, valuing the company at $300M. This partnership fuels our mission to revolutionize email and build the future of digital communication. At Titan, you'll be part of a fast-growing business, solving meaningful problems and shaping a product that empowers millions. Join us to make a real impact.

About Neo: Neo is our fast-growing direct-to-customer platform designed to help small businesses, solopreneurs, and professionals establish a professional online presence with ease. Our offering includes domain name registration, an AI-powered website builder, and professional email - packaged at an affordable monthly rate to ensure accessibility for businesses of all sizes. We are now poised for our next phase of growth and are seeking a Growth Marketing Lead to accelerate new customer acquisition.

About the Role: Join a high-performing team of data scientists and cross-functional partners to uncover insights, drive product strategy, and shape the future of our products. You'll work across the business to identify key opportunities, optimize campaigns, and inform go-to-market and product decisions with data at the core.

Roles and Responsibilities:
- Work with both large and small datasets to solve a variety of business problems using rigorous analytical and statistical methods.
- Apply technical expertise in managing data infrastructure, quantitative analysis, experimentation, dashboard building, and data storytelling to develop actionable strategies and influence product and business decisions.
- Identify and measure the success of product efforts through goal setting, forecasting, and monitoring of key product metrics and initiatives to understand trends and performance.
- Make sound, data-informed recommendations, even when data is sparse, through strong judgment and a structured approach to uncertainty.
- Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions.

Skills and Qualifications:
- Bachelor's or Master's degree in a quantitative field such as Mathematics, Statistics, Computer Science, Engineering, or Economics.
- 3+ years of experience in data science, analytics, or related roles.
- Proficient in SQL and scripting languages such as Python.
- Comfortable working with imperfect or incomplete data, and able to apply appropriate methodologies to extract insights.
- Experience with data visualization tools (e.g., Metabase, Tableau, Power BI, QuickSight).
- Familiarity with machine learning techniques (e.g., regression, decision trees) is a bonus.
- Nice to have: experience working with AWS Cloud and Apache Airflow.

Perks and Benefits: We at Titan love our jobs. And it's no surprise - we get to work on exciting and new projects, in a vibrant atmosphere that is designed to be comfortable and conducive for our personal and professional growth. And Titan goes the extra mile to make you feel at home.

We offer benefits ranging from affordable catered meals to snacks on the house. Our workspaces are welcoming and fun, complete with bean bag chairs and ping pong tables. You are free to wear what makes you comfortable and choose the hours you keep as a team. Oh, and we've got your family covered too, with excellent health insurance plans and other benefits. In short, everything you need to be your best self at work! If you like the idea of working on solutions that have a truly global impact, get in touch!

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chandigarh, India

On-site

Source: LinkedIn

About the Role: We are looking for enthusiastic graduates or early-career professionals to join our dynamic team as an Application Support Engineer. This is an exciting opportunity to gain hands-on experience in financial risk management, market data processing, and application support, while working closely with cross-functional teams across the UK, Netherlands, and India.

Key Responsibilities:
- Manage and monitor daily valuations for client derivatives portfolios.
- Ensure accurate and timely delivery of start-of-day positions by coordinating with the Chandigarh-based IT Production Support team.
- Perform quality checks on market data and valuations, including resolving discrepancies.
- Collaborate with the Operations, Engineering, and Product teams across geographies.
- Contribute to the optimization and automation of existing processes and propose continuous improvements.
- Develop familiarity with financial products such as swaptions, FX, bonds, indices, and portfolios.

What We're Looking For:
- 0-3 years of professional experience, ideally in financial services or IT support.
- Bachelor's degree (or higher) in Finance, Engineering, Mathematics, or a related field.
- Strong sense of responsibility, attention to detail, and an analytical mindset.
- Proficiency in English communication (verbal and written).
- Exposure to or interest in:
  - Financial risk management, trading, bonds, indices, swaptions, FX, portfolios, and valuation
  - SQL and data engineering tools (e.g., Hive, Hadoop, NiFi, Airflow)
  - Linux command line and Git workflows
  - Python (preferred, or willing to learn)
  - Power BI (preferred, or willing to learn)

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

Remote

Source: LinkedIn

About Us

We are a company where the 'HOW' of building software is just as important as the 'WHAT.' We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.

We strive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality, proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of the software craft by setting a new standard for the community.

Job Description

This is a remote position.

Our Core Values:
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients' success is our success.

Experience Level: This role is ideal for engineers with 3+ years of hands-on software development experience, particularly in Python and Airflow at scale.

Role Overview: If you're a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we'd love to meet you. At Incubyte, we're a DevOps organization where developers own the entire release cycle, meaning you'll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that's as quality-obsessed as you are? Read on!

What You'll Do:
- Write Tests First: Start by writing tests to ensure code quality.
- Clean Code: Produce self-explanatory, clean code with predictable results.
- Frequent Releases: Make frequent, small releases.
- Pair Programming: Work in pairs for better results.
- Peer Reviews: Conduct peer code reviews for continuous improvement.
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes.
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines.
- Never Stop Learning: Commit to continuous learning and improvement.

Requirements - What We're Looking For:
- 3+ years of object-oriented programming with Python or equivalent.
- Experience with Airflow.
- Proficiency in some or all of the following: ReactJS, JavaScript, object-oriented programming in JS.
- 3+ years of experience working with relational (SQL) databases.
- 3+ years of experience using Git to contribute code as part of a team of Software Craftspeople.

Benefits - What We Offer:
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you're speaking at a conference, we'll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies.
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family's peace of mind.
- And More: Extra perks to support your well-being and professional growth.

Work Environment

Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility - while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the perfect balance between freedom and responsibility, we enable ourselves to deliver the high-quality standards our customers recognize us by. With asynchronous tools and a push for active participation, we foster a vibrant, hands-on environment where each team member's engagement and contributions drive impactful results.

Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.

Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.

Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
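The listing above emphasises test-first development with Python and Airflow. As a rough illustration of what "write tests first" can look like for a DAG, here is a small pytest sketch that validates a hypothetical orders_etl DAG; the dag_id and task names are made up for the example, not Incubyte's actual pipeline.

```python
# Minimal sketch: pytest checks that run before a DAG change is released.
import pytest
from airflow.models import DagBag


@pytest.fixture(scope="session")
def dagbag():
    # Parse all DAG files once per test session; skip Airflow's bundled examples.
    return DagBag(include_examples=False)


def test_dags_import_without_errors(dagbag):
    # Any syntax or import problem in a DAG file shows up here.
    assert dagbag.import_errors == {}, f"DAG import failures: {dagbag.import_errors}"


def test_orders_dag_structure(dagbag):
    dag = dagbag.get_dag("orders_etl")          # hypothetical dag_id
    assert dag is not None
    # The expected tasks must exist; DagBag parsing already rejects cyclic graphs.
    assert {"extract", "transform", "load"} <= {task.task_id for task in dag.tasks}
```

Running `pytest` against a folder of such tests catches broken imports and missing tasks before anything reaches the scheduler.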

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Role Description

Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.

Outcomes:
- Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reusing proven solutions.
- Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
- Interpret requirements and create optimal architecture and design solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code using best standards; debug and test solutions to ensure best-in-class quality.
- Tune performance of code and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure.
- Create data schemas and models effectively.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes.
- Validate results with user representatives, integrating the overall solution.
- Influence and enhance customer satisfaction and employee engagement within project teams.

Measures of Outcomes:
- Adherence to engineering processes and standards
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- Number of defects post delivery
- Number of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
- Average time to detect, respond to, and resolve pipeline failures or data issues
- Number of data security incidents or compliance breaches

Outputs Expected:
- Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for team and peers.
- Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases, and results.
- Configure: Define and govern the configuration management plan. Ensure compliance from the team.
- Test: Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
- Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
- Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
- Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
- Estimate: Create and provide input for effort and size estimation and plan resources for projects.
- Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
- Release: Execute and monitor the release process.
- Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models.
- Interface with Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
- Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
- Certifications: Obtain relevant domain and technology certifications.

Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.

Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, and Azure ADF and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.

Skills: Scala, Python, PySpark

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Maersk, the world's largest shipping company, is transforming into an industrial digital giant that enables global trade with its land, sea, and port assets. We are the digital and software development organization that builds products in the areas of predictive science, optimization, and IoT. This position offers the opportunity to build your engineering career in a data- and analytics-intensive environment, delivering work that has direct and significant impact on the success of our company.

Global data analytics delivers internal apps to grow revenue and optimize costs across Maersk's business units. We practice agile development in teams empowered to deliver products end-to-end, for which data and analytics are crucial assets. This is an extremely exciting time to join a fast-paced, growing, and dynamic team that solves some of the toughest problems in the industry and builds the future of trade & logistics. We are an open-minded, friendly, and supportive group who strive for excellence together. A.P. Moller - Maersk maintains a strong focus on career development, and strong team members regularly have broad possibilities to expand their skill set and impact in an environment characterized by change and continuous progress.

The team - who are we: We are an ambitious team with the shared passion to use data, machine learning (ML), and engineering excellence to make a difference for our customers. We are a team, not a collection of individuals. We value our diverse backgrounds, our different personalities, and our strengths and weaknesses. We value trust and passionate debates. We challenge each other and hold each other accountable. We uphold a caring feedback culture to help each other grow, professionally and personally. We are now seeking a new team member who is excited about using experiments at scale and ML-driven personalisation to create a seamless experience for our users, helping them find the products and content they didn't even know they were looking for, and drive engagement and business value.

Our new member - who are you: You are driven by curiosity and are passionate about partnering with a diverse range of business and tech colleagues to deeply understand their customers, uncover new opportunities, advise and support them in the design, execution, and analysis of experiments, or to develop ML solutions for ML-driven personalisation (e.g., supervised or unsupervised) that drive substantial customer and business impact. You will use your expertise in experiment design, data science, causal inference, and machine learning to stimulate data-driven innovation. This is an incredibly exciting role with high impact. You are, like us, a team player who cares about your team members, about growing professionally and personally, about helping your teammates grow, and about having fun together.

Basic Qualifications:
- Bachelor's or master's degree in Computer Science, Software Engineering, Data Science, or a related field.
- 3-5 years of professional experience in designing, building, and maintaining scalable data pipelines, in both on-premises and cloud (Azure preferred) environments.
- Strong expertise in working with large datasets from Salesforce, port operations, cargo tracking, and enterprise systems.
- Proficient in writing scalable, high-quality SQL queries and Python code with object-oriented programming, and a solid grasp of data structures and algorithms.
- Experience in software engineering best practices, including version control (Git), CI/CD pipelines, code reviews, and writing unit/integration tests.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes) for data workflows and microservices.
- Hands-on experience with distributed data systems (e.g., Spark, Kafka, Delta Lake, Hadoop).
- Experience in data modelling and workflow orchestration tools like Airflow.
- Ability to support ML engineers and data scientists by building production-grade data pipelines.
- Demonstrated experience collaborating with product managers, domain experts, and stakeholders to translate business needs into robust data infrastructure.
- Strong analytical and problem-solving skills, with the ability to work in a fast-paced, global, and cross-functional environment.

Preferred Qualifications:
- Experience deploying data solutions in enterprise-grade environments, especially in the shipping, logistics, or supply chain domain.
- Familiarity with Databricks, Azure Data Factory, Azure Synapse, or similar cloud-native data tools.
- Knowledge of MLOps practices, including model versioning, monitoring, and data drift detection.
- Experience building or maintaining RESTful APIs for internal ML/data services using FastAPI, Flask, or similar frameworks.
- Working knowledge of ML concepts, such as supervised learning, model evaluation, and retraining workflows.
- Understanding of data governance, security, and compliance practices.
- Passion for clean code, automation, and continuously improving data engineering systems to support machine learning and analytics at scale.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements.

We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
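Since the role above lists Kafka alongside Airflow and Spark, here is a minimal consumer sketch using the kafka-python client. The topic name, broker address, and consumer group are hypothetical, and a real pipeline would write to a durable sink rather than print.

```python
# Minimal sketch: consume JSON events from a Kafka topic for downstream processing.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "container-tracking-events",              # hypothetical topic
    bootstrap_servers=["localhost:9092"],     # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
    group_id="tracking-etl",                  # hypothetical consumer group
)

for message in consumer:
    event = message.value
    # In a real pipeline this is where the event would be validated and written
    # to a durable store (e.g., a Delta table); printing stands in for that here.
    print(event.get("container_id"), event.get("status"))
```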

Posted 1 week ago

Apply

9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Company Description

Sandisk understands how people and businesses consume data, and we relentlessly innovate to deliver solutions that enable today's needs and tomorrow's next big ideas. With a rich history of groundbreaking innovations in Flash and advanced memory technologies, our solutions have become the beating heart of the digital world we're living in and that we have the power to shape. Sandisk meets people and businesses at the intersection of their aspirations and the moment, enabling them to keep moving and pushing possibility forward. We do this through the balance of our powerhouse manufacturing capabilities and our industry-leading portfolio of products that are recognized globally for innovation, performance, and quality.

Sandisk has two facilities recognized by the World Economic Forum as part of the Global Lighthouse Network for advanced 4IR innovations. These facilities were also recognized as Sustainability Lighthouses for breakthroughs in efficient operations. With our global reach, we ensure the global supply chain has access to the Flash memory it needs to keep our world moving forward.

Job Description

We are seeking a passionate candidate dedicated to building robust data pipelines and handling large-scale data processing. The ideal candidate will thrive in a dynamic environment and demonstrate a commitment to optimizing and maintaining efficient data workflows, with hands-on experience in Python, MariaDB, SQL, Linux, Docker, Airflow administration, and CI/CD pipeline creation and maintenance. The application is built using Python Dash, and the role will involve application deployment, server administration, and ensuring the smooth operation and upgrading of the application.

Key Responsibilities
- Minimum of 9+ years of experience in developing data pipelines using Spark.
- Ability to design, develop, and optimize Apache Spark applications for large-scale data processing.
- Ability to implement efficient data transformation and manipulation logic using Spark RDDs and DataFrames.
- Manage server administration tasks, including monitoring, troubleshooting, and optimizing performance.
- Administer and manage databases (MariaDB) to ensure data integrity and availability.
- Ability to design, implement, and maintain Apache Kafka pipelines for real-time data streaming and event-driven architectures.
- Deep technical development skill in Python, PySpark, Scala, and SQL/stored procedures.
- Working knowledge of Unix/Linux operating system tools such as awk, ssh, crontab, etc.
- Ability to write Transact-SQL and to develop and debug stored procedures and user-defined functions in Python.
- Working experience with Postgres and/or Redshift/Snowflake databases is required.
- Exposure to CI/CD tools such as Bitbucket, Jenkins, Ansible, Docker, Kubernetes, etc. is preferred.
- Ability to understand relational database systems and their concepts.
- Ability to handle large tables/datasets of 2+ TB in a columnar database environment.
- Ability to integrate data pipelines with Splunk/Grafana for real-time monitoring and analysis, and with Power BI for visualization.
- Ability to create and schedule Airflow jobs.

Qualifications
- Minimum of a bachelor's degree in computer science or engineering; a master's degree is preferred.
- AWS developer certification is preferred.
- Any certification in SDLC (Software Development Life Cycle) methodology, integrated source control systems, continuous development, and continuous integration is preferred.

Additional Information

Sandisk thrives on the power and potential of diversity. As a global company, we believe the most effective way to embrace the diversity of our customers and communities is to mirror it from within. We believe the fusion of various perspectives results in the best outcomes for our employees, our company, our customers, and the world around us. We are committed to an inclusive environment where every individual can thrive through a sense of belonging, respect, and contribution.

Sandisk is committed to offering opportunities to applicants with disabilities and ensuring all candidates can successfully navigate our careers website and our hiring process. Please contact us at jobs.accommodations@sandisk.com to advise us of your accommodation request. In your email, please include a description of the specific accommodation you are requesting, as well as the job title and requisition number of the position for which you are applying.
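As a small illustration of the Spark DataFrame work this listing centres on (not Sandisk's actual job), here is a PySpark sketch that reads raw Parquet events, aggregates them per device and day, and writes the result back out. The file paths and column names are assumed for the example.

```python
# Minimal sketch: batch PySpark job aggregating raw events into a daily rollup.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

# Hypothetical input path holding raw event records in Parquet format.
events = spark.read.parquet("/data/raw/events/")

daily_counts = (
    events
    .filter(F.col("status") == "ok")                      # keep successful events only
    .groupBy("device_id", F.to_date("event_ts").alias("event_date"))
    .agg(
        F.count("*").alias("event_count"),
        F.avg("latency_ms").alias("avg_latency_ms"),
    )
)

# Write the rollup to a curated zone in a columnar format.
daily_counts.write.mode("overwrite").parquet("/data/curated/daily_counts/")
spark.stop()
```

A job like this would typically be submitted with spark-submit (or scheduled from an Airflow task) rather than run interactively.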

Posted 1 week ago

Apply

130.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Description

Senior Manager, Tech Lead

The Opportunity: Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Overview: Join our company as we transform and innovate. We are at the forefront of research to deliver innovative health solutions that advance the prevention and treatment of diseases in people and animals. We are currently seeking a Technology Engineer to help deliver our Data and Analytics Platform product. This is also an exciting opportunity to contribute to the development of our broader company's Data and Analytics practice inside our team.

What Will You Do In This Role
- Contribute to the architecture, design, and engineering of the Global Support Functions Data Engineering, Data Integration, and Data Visualization service.
- Contribute to the architecture, design, and engineering of global data engineering/integration/visualization services.
- Define best practices and guidelines for delivery of analytic solutions; consume practices from, and collaborate to improve practices amongst, our emerging engineering community.
- Contribute to identifying capability gaps in product capabilities and designing solutions to address them.
- Execute on opportunities to automate and simplify maintenance and lifecycle of platform services.
- Maintain current industry knowledge of cloud-native concepts, best practices, and technologies.
- Prioritize workloads, commitments, and scheduled timelines.
- Interact frequently with product managers and engineering teams to onboard their new deliveries to our central platforms.
- Document, review, and ensure that all System Development Lifecycle (SDLC) and company policy standards are met.
- Provide a point of escalation for the product customer support team.

What Should You Have
- BS degree or equivalent in Computer Science, Computer Engineering, Information Systems, or equivalent experience.

Required
- Relevant certification or completion of an equivalent program in areas such as Software Development, Computer Science, or Computer Engineering.
- Hands-on experience with various data engineering/integration/BI platforms (e.g., AWS Glue, Athena, S3 storage, Apache Airflow, Redshift, Snowflake, Databricks, Collibra).
- Understanding of cloud service providers (e.g., AWS, Azure, etc.).
- Understanding of web and network protocols such as HTTP/S, TCP/IP, and DNS, and of basic routing concepts.
- Experience with a scripting language such as Python or Unix shell scripts, with a strong focus on automation.
- Experience with software solution design and documentation.
- Strong knowledge and experience in IT, specifically in designing, developing, modifying, and implementing solutions.
- Proficiency in working with both new and existing applications, systems architecture, and network systems.
- Ability to review and understand system requirements and business processes.
- Proficiency in coding, testing, debugging, and implementing software solutions.
- Expertise in the engineering, delivery, and management of cloud solutions, including cloud platform and cloud-native services.
- Experience in monitoring the consumption of cloud resources and managing application performance.
- Ability to oversee request fulfillment turnaround efficiently.
- Strong understanding of maintaining system security posture.
- Strong leadership skills, including but not limited to strategic planning, entrepreneurship, innovation, and business savviness.
- Strong commitment to diversity, equity, and inclusion, and the ability to influence and motivate others.
- Excellent emotional intelligence, decision-making skills, and a strong sense of ownership and accountability.

Preferred
- 6 to 8 years of experience in the IT field and/or a related program.
- Ability to work in a matrixed and highly concurrent environment.
- Demonstrated ability to plan and execute a project or experiment, including milestones and endpoints.
- Experience working as part of a global, diverse team.
- Experience using, implementing, and/or operating data warehousing or a broad range of analytic solutions.
- Experience with Amazon Web Services such as VPC, Route 53, EC2, ALB, S3, RDS, IAM, etc.
- Relevant certification (e.g., AWS, Azure, etc.).
- Experience debugging software and/or scripting errors.
- Experience with the Go programming language.
- Experience with infrastructure, network, database, or security troubleshooting.
- Experience delivering products and features using Agile/Scrum methodologies.
- Experience with DevOps tools such as Git, Terraform, Jira, Jenkins, CloudBees, GitHub Actions, etc.
- Experience in System Development Lifecycle (SDLC) documentation.

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today. #HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Job Posting End Date: 06/9/2025
A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.
Requisition ID: R342324

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Tata Consultancy Services is hiring Python Full Stack Developers!

Role: Python Full Stack Developer
Desired Experience Range: 6-8 Years
Location of Requirement: Hyderabad

Desired Skills (Technical/Behavioral)

Primary Skill - Frontend
- 6+ years of overall experience with proficiency in React (2+ years), TypeScript (1+ year), and React hooks (1+ year)
- Experience with ESLint, CSS-in-JS styling (preferably Emotion), state management (preferably Redux), and JavaScript bundlers such as Webpack
- Experience integrating with RESTful APIs or other web services

Backend
- Expertise with Python (3+ years, preferably Python 3)
- Proficiency with a Python web framework (2+ years, preferably Flask and FastAPI)
- Experience with a Python linter (preferably flake8), graph databases (preferably Neo4j), a package manager (preferably pip), Elasticsearch, and Airflow
- Experience developing microservices, RESTful APIs, or other web services
- Experience with database design and management, including NoSQL/RDBMS tradeoffs

Interested and eligible candidates can apply.
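For reference on the backend stack named above (Python 3 with FastAPI), a minimal, self-contained endpoint might look like the sketch below; the resource model and in-memory store are illustrative only, not part of the actual role.

```python
# Minimal sketch: a single RESTful read endpoint built with FastAPI and Pydantic.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-service")

# In-memory stand-in for a real datastore (hypothetical data).
ITEMS = {1: {"id": 1, "name": "sample"}}


class Item(BaseModel):
    id: int
    name: str


@app.get("/items/{item_id}", response_model=Item)
def read_item(item_id: int) -> Item:
    item = ITEMS.get(item_id)
    if item is None:
        raise HTTPException(status_code=404, detail="item not found")
    return Item(**item)
```

Assuming the file is saved as main.py, running `uvicorn main:app --reload` serves the endpoint locally and exposes interactive API docs at /docs.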

Posted 1 week ago

Apply

3.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Role Description

Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be adept at using ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding skills in Python, PySpark, and SQL. Works independently and demonstrates proficiency in at least one domain related to data, with a solid understanding of SCD concepts and data warehousing principles.

Outcomes:
- Collaborate closely with data analysts, data scientists, and other stakeholders to ensure data accessibility, quality, and security across various data sources.
- Design, develop, and maintain data pipelines that collect, process, and transform large volumes of data from various sources.
- Implement ETL (Extract, Transform, Load) processes to facilitate efficient data movement and transformation.
- Integrate data from multiple sources, including databases, APIs, cloud services, and third-party data providers.
- Establish data quality checks and validation procedures to ensure data accuracy, completeness, and consistency.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
- Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.

Measures of Outcomes:
- Adherence to engineering processes and standards
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- Number of defects post delivery
- Number of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
- Average time to detect, respond to, and resolve pipeline failures or data issues

Outputs Expected:
- Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements.
- Documentation: Create documentation for personal work and review deliverable documents, including source-target mappings, test cases, and results.
- Configuration: Follow configuration processes diligently.
- Testing: Create and conduct unit tests for data pipelines and transformations to ensure data quality and correctness. Validate the accuracy and performance of data processes.
- Domain Relevance: Develop features and components with a solid understanding of the business problems being addressed for the client. Understand data schemas in relation to domain-specific contexts, such as EDI formats.
- Defect Management: Raise, fix, and retest defects in accordance with project standards.
- Estimation: Estimate time, effort, and resource dependencies for personal work.
- Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities.
- Design Understanding: Understand design and low-level design (LLD) and link it to requirements and user stories.
- Certifications: Obtain relevant technology certifications to enhance skills and knowledge.

Skill Examples:
- Proficiency in SQL, Python, or other programming languages utilized for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning of data processes.
- Proficiency in querying data warehouses.

Knowledge Examples:
- Knowledge of various ETL services provided by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, and Azure ADF/ADLF.
- Understanding of data warehousing principles and practices.
- Proficiency in SQL for analytics, including windowing functions.
- Familiarity with data schemas and models.
- Understanding of domain-related data and its implications.

Additional Comments

Responsibilities:
- Design, develop, and maintain data pipelines and architectures using Azure services.
- Collaborate with data scientists and analysts to meet data needs.
- Optimize data systems for performance and reliability.
- Monitor and troubleshoot data storage and processing issues.
- Ensure data security and compliance with company policies.
- Document data solutions and architecture for future reference.
- Stay updated with Azure data engineering best practices and tools.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in data engineering.
- Proficiency in Azure Data Factory, Azure SQL Database, and Azure Databricks.
- Experience with data modeling and ETL processes.
- Strong understanding of database management and data warehousing concepts.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Skills: Azure Data Factory, Azure SQL Database, Azure Databricks, ETL, Data Modeling, SQL, Python, Big Data Technologies, Data Warehousing, Azure DevOps, Azure, AWS, AWS Cloud, Azure Cloud

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Build the future of the AI Data Cloud. Join the Snowflake team.

Job Description

We are looking for a Senior Analytics Engineer to join our growing Finance data team. In this role, you will drive value and empower decision-making by developing and maintaining the data infrastructure which fuels reporting and analysis for the Finance organization and has a direct impact on Snowflake's success as a company.

As a Senior Analytics Engineer, you will be responsible for the following:
- Use SQL, Python, Snowflake, dbt, Airflow, and other systems while working within an agile development model to build and maintain data infrastructure for use in reporting, analysis, and automation.
- Perform data QA and develop automated testing procedures for use with Snowflake data models.
- Translate reporting, analysis, and automation requirements into data model requirements and specifications.
- Work with IT and other technical stakeholders to source data from key business systems and Snowflake databases.
- Architect flexible, performant data models that will support a wide range of use cases while driving the organization towards single sources of truth.
- Provide input into data governance strategies and frameworks, including permissions and security models, data lineage systems, and data definitions.
- Meet regularly with Finance BI and technical leads to define requirements, formulate project plans, provide status updates, and perform user testing and QA.
- Build and maintain user-friendly documentation for data models and key metrics.
- Identify weaknesses in processes, data, and systems, and drive organizational improvements within the Analytics Engineering team.

What You Will Need (required skills):
- 5+ years of experience working as an analytics, data, or BI engineer.
- Advanced SQL skills, with experience standardizing queries and building data infrastructure involving large-scale relational datasets.
- Experience using Python to parse, structure, and transform data.
- Experience with MPP databases such as Snowflake, Redshift, BigQuery, or other relevant technologies.
- Ability to communicate in an effective and efficient manner while working with a wide range of stakeholders.
- Ability to prioritize and execute tasks in a high-pressure, constantly changing environment.
- Ability to think creatively to solve problems.
- Passion for detail and quality, an ability to identify weaknesses in data and process, and a willingness to drive improvement.
- Willingness to work flexible international hours as needed.
- Experience with Excel, ERP, Salesforce, and financial planning tools.

Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact?

For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
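Given the dbt + Airflow + Snowflake stack described above, a common wiring is to have Airflow trigger dbt runs and tests on a schedule. The sketch below shows one minimal way to do that with BashOperator; the DAG id, project path, and dbt target are assumptions for illustration, not Snowflake's internal setup.

```python
# Minimal sketch: Airflow DAG that runs dbt models and then dbt tests once per day.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="finance_dbt_refresh",     # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Build the dbt models against the warehouse defined by the chosen target.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/finance && dbt run --target prod",  # hypothetical path/target
    )

    # Run dbt's data tests so model changes fail fast rather than in reports.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/finance && dbt test --target prod",
    )

    dbt_run >> dbt_test
```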

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Naukri logo

Essential Responsibilities: As a Senior Software Engineer, your responsibilities will include: Building, refining, tuning, and maintaining our real-time and batch data infrastructure. Using technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc. daily. Maintaining data quality and accuracy across production data systems. Working with Data Analysts to develop ETL processes for analysis and reporting. Working with Product Managers to design and build data products. Working with our team to scale and optimize our data infrastructure. Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects. Participating in the on-call rotation in your respective time zone (being available by phone or email in case something goes wrong). Desired Characteristics: Minimum 8 years of software engineering experience. An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired. 2+ years of experience/fluency in Python. Proficient with relational databases and advanced SQL. Expert in the use of services like Spark and Hive. Experience working with container-based solutions is a plus. Experience using a scheduler such as Apache Airflow, Apache Luigi, Chronos, etc. Experience using cloud services (AWS) at scale. Proven long-term experience with, and enthusiasm for, distributed data processing at scale, and eagerness to learn new things. Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments. Exposure to the whole software development lifecycle, from inception to production and monitoring. Experience in the Advertising Attribution domain is a plus. Experience in agile software development processes. Excellent interpersonal and communication skills.
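
For the data-quality responsibility listed above, one common pattern is a lightweight gate that fails the batch before bad data reaches downstream consumers; the table, key columns, and thresholds here are hypothetical:

```python
# Illustrative data-quality gate: the table, key columns and thresholds are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.table("events.attribution_daily")

total = df.count()
null_users = df.filter(F.col("user_id").isNull()).count()
dupes = total - df.dropDuplicates(["event_id"]).count()

# Raise so the orchestrator (e.g. Airflow) marks the task as failed
if total == 0:
    raise ValueError("attribution_daily is empty for this run")
if null_users / total > 0.01:
    raise ValueError(f"user_id null rate too high: {null_users}/{total}")
if dupes > 0:
    raise ValueError(f"{dupes} duplicate event_id rows detected")

print(f"DQ checks passed: {total} rows, no duplicates, null rate within threshold")
```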

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Overview Working at Atlassian Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. Responsibilities On your first day, we'll expect you to have: BS in Computer Science or equivalent experience with 3+ years as a Data Engineer or a similar role Programming skills in Python & Java (good to have) Design data models for storage and retrieval to meet product and requirements Build scalable data pipelines using Spark, Airflow, AWS data services (Redshift, Athena, EMR), Apache projects (Spark, Flink, Hive, and Kafka) Familiar with modern software development practices (Agile, TDD, CICD) applied to data engineering Enhance data quality through internal tools/frameworks detecting DQ issues. Working knowledge of relational databases and SQL query authoring We'd Be Super Excited If You Have Followed a Kappa architecture with any of your previous deployments and domain knowledge of Finance/Financial Systems Qualifications Our perks & benefits Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more. About Atlassian At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh . Show more Show less
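
Since the posting calls out Kafka, Spark and a Kappa-style architecture, a minimal Structured Streaming sketch is shown below; the broker, topic, schema and sink locations are hypothetical, and the Spark–Kafka connector package is assumed to be on the classpath:

```python
# Illustrative streaming job: broker, topic, schema and storage locations are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("payments_stream").getOrCreate()

schema = StructType([
    StructField("payment_id", StringType()),
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the raw event stream (Kappa style: the log is the single source of truth)
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "payments")
    .option("startingOffsets", "latest")
    .load()
)

parsed = events.select(
    F.from_json(F.col("value").cast("string"), schema).alias("e")
).select("e.*")

# Continuously append parsed events to a queryable location
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/streams/payments/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/payments/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```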

Posted 1 week ago

Apply

4.0 - 6.0 years

4 - 6 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Foundit logo

As part of the Big Data B2B program, OBS set up a shared Big Data platform and a Data Lake for use-case exploration and industrialization. We are looking for a Senior Data Engineer with 4-6 years of experience in building data pipelines on-prem and in the cloud, with the below KRAs: Automate and industrialize the build and development tasks. Lead discussion sessions with stakeholders. Participate in all areas of the data engineering life cycle and lead the team in requirements gathering and data mapping, systems design, data ingestion development, preparing data mapping documentation, testing and deployment, and post-implementation support and monitoring. Resolve and troubleshoot problems and complex issues. Provide innovative solutions to complex business problems. Report to and work closely with project teams and the Business Analysis team on project delivery status. Prepare progress updates and status reports. Provide operational support, ongoing maintenance and enhancement after implementation as part of Run Management activities. Implement best data integration practices. Good understanding of the Big Data ecosystem with frameworks like Hadoop and Spark. Good experience handling large-volume data, both structured and unstructured, in streaming and batch modes. High coding proficiency in at least one modern programming language: Python, Java or Scala. Hands-on experience with NiFi, Hive, SQL/HQL, Spark SQL, Spark Streaming, Oozie, and Airflow. Good understanding of data integration patterns. Good understanding of Kafka, RabbitMQ, and Airflow. Good understanding of API concepts: REST and microservices architecture. Experience with DevOps tooling: Jenkins, Maven, GitLab, SonarQube, Docker. Good understanding of DevOps concepts and technologies such as Kubernetes, Docker, and containers. Good understanding of the ELK stack. Good understanding of monitoring tools like Prometheus, Grafana, etc. A good understanding of cloud architecture is a must. Professional certification in any of the hyperscalers, especially GCP, is a plus. Full understanding of the compute, network and storage services of GCP. Proficiency in GCP services such as BigQuery, Dataflow, Pub/Sub, Dataproc, and Bigtable. Good understanding of Linux and shell scripting. Good experience with Agile methods (Scrum, Kanban). Good understanding of JIRA. Understanding of data modelling is a value addition. Understanding of Open Digital Architecture and TMF principles is preferable. Understanding of tools like DSS and Jupyter Notebook is preferable.
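
To illustrate the BigQuery portion of the GCP stack above, here is a minimal load-then-transform sketch with the google-cloud-bigquery client; the project, dataset, table and bucket names are made up:

```python
# Illustrative only: project, dataset, table and GCS URI are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Load a day of landed Parquet files from GCS into a staging table
load_job = client.load_table_from_uri(
    "gs://example-landing/orders/2024-06-01/*.parquet",
    "example-project.staging.orders_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # wait for the load to finish

# Transform into a reporting table, de-duplicating with a window function
sql = """
CREATE OR REPLACE TABLE `example-project.analytics.orders` AS
SELECT * EXCEPT(rn) FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY updated_at DESC) AS rn
  FROM `example-project.staging.orders_raw`
)
WHERE rn = 1
"""
client.query(sql).result()
```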

Posted 1 week ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Who We Are: Wayfair’s Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display & Video Ad offerings that cater to a variety of Advertiser goals while showing highly relevant and engaging Ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI and are leveraging state of the art Machine Learning techniques. The Advertising Optimisation & Automation Science team is central to this effort. We leverage machine learning and generative AI to streamline campaign workflows, delivering impactful recommendations on budget allocation, target Return on Ad Spend (tROAS), and SKU selection. Additionally, we are developing intelligent systems for creative optimization and exploring agentic frameworks to further simplify and enhance advertiser interactions. We are seeking an experienced Senior Machine Learning Manager to lead a growing team focused on building intelligent, ML-powered systems that drive personalized recommendations and campaign automation within Wayfair’s advertising platform. In this role, you will have the opportunity to define and deliver 0-to-1 capabilities that unlock substantial commercial value and directly enhance advertiser outcomes. ​Working closely with a high-performing team of ML scientists and engineers, you'll tackle some of Wayfair's most intellectually challenging problems in machine learning, latency, and scalability, while directly contributing to the company's bottom line. What You’ll do: Own the strategy, roadmap, and execution of supplier advertising intelligence and automation solutions. Lead the next phase of GenAI-powered creative optimization, ML-based recommendation systems, and campaign automation to drive significant incremental ad revenue and improve supplier outcomes. Build, coach, and manage a team of ML scientists focused on developing intelligent budget, tROAS, and SKU recommendations, creative testing frameworks, and simulation-driven decisioning that extends beyond the current platform capabilities. Partner cross-functionally with Product, Engineering, and Sales to deliver scalable ML solutions that improve supplier campaign performance. Ensure systems are designed for reuse, extensibility, and long-term impact across multiple advertising workflows. Research and apply best practices in advertising science, GenAI applications in creative personalisation, and auction modeling. Keep Wayfair at the forefront of innovation in supplier marketing optimisation. Collaborate with Engineering teams (AdTech, ML Platform, Campaign Management) to build and scale the infrastructure needed for automated, intelligent advertising decisioning. Act as SME and provide mentorship and technical guidance beyond own team on the broader DS/Eng org when needed What You’ll need: Bachelor's or Master’s degree in Computer Science, Mathematics, Statistics, or related field. 10+ years of industry experience, with at least 1-2 years experience as a manager of teams and 5+ serving as an IC on production ML systems. 
Strategic thinker with a customer-centric mindset and a desire for creative problem solving, looking to make a big impact in a growing organisation. Demonstrated success influencing senior level stakeholders on strategic direction based on recommendations backed by in-depth analysis; excellent written and verbal communication. Ability to partner cross-functionally to own and shape technical roadmaps and the organizations required to drive them. Proficient in one or more programming languages, e.g. Python, Golang, Rust etc. Nice to have: Experience with GCP, Airflow, and containerization (Docker). Experience building scalable data processing pipelines with big data tools such as Hadoop, Hive, SQL, Spark, etc. Familiarity with Generative AI and agentic workflows. Experience in Bayesian Learning, Multi-armed Bandits, or Reinforcement Learning. Show more Show less

Posted 1 week ago

Apply

6.0 - 9.0 years

20 - 25 Lacs

Hyderabad

Hybrid

Naukri logo

Role & responsibilities: Design, build, and measure complex ELT jobs to process disparate data sources and form a high-integrity, high-quality, clean data asset. Executes and provides feedback for data modeling policies, procedures, processes, and standards. Assists with capturing and documenting system flow and other pertinent technical information about data, database design, and systems. Develop data quality standards and tools for ensuring accuracy. Work across departments to understand new data patterns. Translate high-level business requirements into technical specs. Bachelor's degree in computer science or engineering. Years of experience with data analytics, data modeling, and database design. Years of experience with Vertica. Years of coding and scripting (Python, Java, Scala) and design experience. Years of experience with Airflow. Experience with ELT methodologies and tools. Experience with GitHub. Expertise in tuning and troubleshooting SQL. Strong data integrity, analytical and multitasking skills. Excellent communication, problem solving, organizational and analytical skills. Able to work independently. Additional / preferred skills: Familiar with agile project delivery process. Knowledge of SQL and use in data access and analysis. Ability to manage diverse projects impacting multiple roles and processes. Able to troubleshoot problem areas and identify data gaps and issues. Ability to adapt to a fast-changing environment. Experience designing and implementing automated ETL processes. Experience with the MicroStrategy reporting tool. Preferred candidate profile
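
As a small illustration of the SQL tuning and windowing work described here, a hedged sketch using the vertica-python driver follows; the connection settings and table names are placeholders:

```python
# Illustrative only: connection settings and table names are placeholders.
import vertica_python

conn_info = {
    "host": "vertica.example.internal",
    "port": 5433,
    "user": "etl_user",
    "password": "********",
    "database": "analytics",
}

# Windowed query: keep the latest order per customer, a common ELT de-duplication step
sql = """
SELECT customer_id, order_id, amount
FROM (
    SELECT customer_id, order_id, amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
    FROM staging.orders
) t
WHERE rn = 1
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(sql)
    for customer_id, order_id, amount in cur.fetchall():
        print(customer_id, order_id, amount)
```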

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Overview Working at Atlassian Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. Responsibilities Data is a BIG deal at Atlassian. We ingest over 180 billion events each month into our analytics platform and we have dozens of teams across the company driving their decisions and guiding their operations based on the data and services we provide. The data engineering team manages several data models and data pipelines across Atlassian, including finance, growth, product analysis, customer support, sales, and marketing. You'll join a team that is smart and very direct. We ask hard questions and challenge each other to constantly improve our work. As a Data Engineer, you will apply your technical expertise to build analytical data models that support a broad range of analytical requirements across the company. You will work with extended teams to evolve solutions as business processes and requirements change. You'll own problems end-to-end and on an ongoing basis, you'll improve the data by adding new sources, coding business rules, and producing new metrics that support the business. Qualifications BS/BA in Computer Science, Engineering, Information Management, or other technical fields and 4+ years of data engineering experience Strong programming skills using Python or Java. Working knowledge of relational databases and query authoring via SQL. Experience designing data models for optimal storage and retrieval to meet product and business requirements. Experience building scalable data pipelines using Spark (SparkSQL) with Airflow scheduler/executor framework or similar scheduling tools. Experience building real-time data pipelines using a micro-services architecture. Experience working with AWS data services or similar Apache projects (Spark, Flink, Hive, and Kafka). Understanding of Data Engineering tools/frameworks and standards to improve the productivity and quality of output for Data Engineers across the team. Well-versed in modern software development practices (Agile, TDD, CICD). A willingness to accept failure, learn, and try again Our Perks & Benefits Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more. About Atlassian At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. 
Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh . Show more Show less

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Naukri logo

Roles and Responsibilities Lead the design, development, and maintenance of data pipelines and ETL processes. Architect and implement scalable data solutions using Databricks and AWS. Optimize data storage and retrieval systems using Rockset, Clickhouse, and CrateDB. Develop and maintain data APIs using FastAPI. Orchestrate and automate data workflows using Airflow. Collaborate with data scientists and analysts to support their data needs. Ensure data quality, security, and compliance across all data systems. Mentor junior data engineers and promote best practices in data engineering. Evaluate and implement new data technologies to improve the data infrastructure. Participate in cross-functional projects and provide technical leadership. Manage and optimize data storage solutions using AWS S3, implementing best practices for data lakes and data warehouses. Implement and manage Databricks Unity Catalog for centralized data governance and access control across the organization. Qualifications Required Bachelor's or Master's degree in Computer Science, Engineering, or related field 5+ years of experience in data engineering, with at least 2-3 years in a lead role Strong proficiency in Python, PySpark, and SQL Extensive experience with Databricks and AWS cloud services Hands-on experience with Airflow for workflow orchestration Familiarity with FastAPI for building high-performance APIs Experience with columnar databases like Rockset, Clickhouse, and CrateDB Solid understanding of data modeling, data warehousing, and ETL processes Experience with version control systems (e.g., Git) and CI/CD pipelines Excellent problem-solving skills and ability to work in a fast-paced environment Strong communication skills and ability to work effectively in cross-functional teams Knowledge of data governance, security, and compliance best practices Proficiency in designing and implementing data lake architectures using AWS S3 Experience with Databricks Unity Catalog or similar data governance and metadata management tools Skills and Experience Required Tech Stack Databricks, Python, PySpark, SQL, Airflow, FastAPI, AWS (S3, IAM, ECR, Lambda), Rockset, Clickhouse, CrateDB Why you'll love working with us: Opportunity to work on business challenges from top global clientele with high impact. Vast opportunities for self-development, including online university access and sponsored certifications. Sponsored Tech Talks, industry events & seminars to foster innovation and learning. Generous benefits package including health insurance, retirement benefits, flexible work hours, and more. Supportive work environment with forums to explore passions beyond work. This role presents an exciting opportunity for a motivated individual to contribute to the development of cutting-edge solutions while advancing their career in a dynamic and collaborative environment.
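
A minimal sketch of the kind of data API mentioned above, built with FastAPI; the in-memory store below is a hypothetical stand-in for whichever backend (Databricks SQL, Clickhouse, Rockset, etc.) actually serves the data:

```python
# Illustrative FastAPI service: the metrics store and response model are assumptions.
from datetime import date

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="metrics-api")

class DailyMetric(BaseModel):
    day: date
    orders: int
    revenue: float

# Stand-in for a real query against Clickhouse / Databricks SQL / Rockset
_FAKE_STORE = {
    date(2024, 6, 1): DailyMetric(day=date(2024, 6, 1), orders=1250, revenue=98234.50),
}

@app.get("/metrics/daily/{day}", response_model=DailyMetric)
def get_daily_metric(day: date) -> DailyMetric:
    metric = _FAKE_STORE.get(day)
    if metric is None:
        raise HTTPException(status_code=404, detail="no metrics for that day")
    return metric
```

If the file is saved as app.py, it can be served locally with `uvicorn app:app --reload`.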

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Overview Working at Atlassian Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. Responsibilities On your first day, we'll expect you to have: BS in Computer Science or equivalent experience with 3+ years as a Data Engineer or a similar role Programming skills in Python & Java (good to have) Design data models for storage and retrieval to meet product and requirements Build scalable data pipelines using Spark, Airflow, AWS data services (Redshift, Athena, EMR), Apache projects (Spark, Flink, Hive, and Kafka) Familiar with modern software development practices (Agile, TDD, CICD) applied to data engineering Enhance data quality through internal tools/frameworks detecting DQ issues. Working knowledge of relational databases and SQL query authoring We'd Be Super Excited If You Have Followed a Kappa architecture with any of your previous deployments and domain knowledge of Finance/Financial Systems Qualifications Our perks & benefits Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more. About Atlassian At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh . Show more Show less

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Hiring for AWS Data Engineer with FastAPI Immediate Joiners Pune and Chennai locations 7-10 years of experience Share profiles to neha.sandurea@godoublu.com We are seeking a skilled and motivated AWS Data Engineer with expertise in FastAPI, Pub/Sub messaging systems, and Apache Airflow to build and maintain scalable, cloud-native applications on AWS. The ideal candidate has strong experience in modern Python development and strong hands-on experience with event-driven architectures and data workflow orchestration in AWS cloud environments. Required Qualifications: Bachelor’s degree in computer science, data science, or a related technical discipline. 7+ years of hands-on experience in data engineering, including developing ETL/ELT data pipelines, API integration (FastAPI preferred), data platforms/products, and/or data warehouses. 3+ years of hands-on experience in developing data-intensive solutions on AWS for operational and analytics workloads. 3+ years of experience in designing both ETL/ELT for batch processing and data streaming architectures for real-time or near real-time data ingestion and processing. 3+ years of experience developing and orchestrating complex data workflows using Apache Airflow (mandatory), including DAG authoring, scheduling, and monitoring. 2+ years of experience in building and managing event-driven microservices using Pub/Sub systems (e.g., AWS SNS/SQS, Kafka). 3+ years of hands-on experience in two or more database technologies (e.g., MySQL, PostgreSQL, MongoDB) and data warehouses (e.g., Redshift, BigQuery, or Snowflake), as well as cloud-based data engineering technologies. Proficient in dashboard/BI and data visualization tools (e.g., Tableau, QuickSight). Develop conceptual, logical, and physical data models using ERDs. Thrives in dynamic, cross-functional team environments. Possesses a team-first mindset, valuing diverse perspectives and contributing to a collaborative work culture. Approaches challenges with a positive and can-do attitude. Willing to challenge the status quo, demonstrating the ability to understand when and how to take appropriate risks to drive performance. A passionate problem solver. High learning agility. Show more Show less
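
For the SNS/SQS-style event-driven requirement, a hedged consumer sketch using boto3 is shown below; the queue URL, region, and processing logic are illustrative assumptions:

```python
# Illustrative SQS consumer: queue URL, region and processing logic are assumptions.
import json

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-events"

def process(event: dict) -> None:
    # Placeholder for the real transform / load step
    print("ingesting record", event.get("record_id"))

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling keeps API calls cheap
    )
    for msg in resp.get("Messages", []):
        process(json.loads(msg["Body"]))
        # Delete only after successful processing so failed messages are retried
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```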

Posted 1 week ago

Apply

7.0 - 12.0 years

7 - 12 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Foundit logo

Job Summary: We are looking for a highly skilled Data Engineer with hands-on experience in Snowflake, Python, DBT, and modern data architecture. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and data warehouse solutions that support analytics and business intelligence initiatives. Key Responsibilities: Design and implement scalable data pipelines using ETL/ELT frameworks. Develop and maintain data models and data warehouse architecture using Snowflake. Build and manage DBT (Data Build Tool) models for data transformation and lineage tracking. Write efficient and reusable Python scripts for data ingestion, transformation, and automation. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements. Ensure data quality, integrity, and governance across all data platforms. Monitor and optimize performance of data pipelines and queries. Implement best practices for data engineering, including version control, testing, and CI/CD. Required Skills and Qualifications: 8+ years of experience in data engineering or a related field. Strong expertise in Snowflake including schema design, performance tuning, and security. Proficiency in Python for data manipulation and automation. Solid understanding of data modeling concepts (star/snowflake schema, normalization, etc.). Experience with DBT for data transformation and documentation. Hands-on experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, Prefect). Strong SQL skills and experience with large-scale data sets. Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.
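
To make the Snowflake-plus-Python requirement concrete, a minimal ingestion sketch with the snowflake-connector-python driver follows; the account, credentials, stage and table names are placeholders:

```python
# Illustrative only: credentials, stage and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",
    user="ETL_USER",
    password="********",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

cur = conn.cursor()
try:
    # Load newly landed files from an external stage into a staging table
    cur.execute(
        "COPY INTO ORDERS_RAW FROM @LANDING_STAGE/orders/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Merge the staged rows into the reporting model
    cur.execute("""
        MERGE INTO ANALYTICS.MARTS.ORDERS AS t
        USING ORDERS_RAW AS s
        ON t.ORDER_ID = s.ORDER_ID
        WHEN MATCHED THEN UPDATE SET t.AMOUNT = s.AMOUNT, t.UPDATED_AT = s.UPDATED_AT
        WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT, UPDATED_AT)
            VALUES (s.ORDER_ID, s.AMOUNT, s.UPDATED_AT)
    """)
finally:
    cur.close()
    conn.close()
```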

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Linkedin logo

About the Team Data is at the foundation of DoorDash success. The Data Engineering team builds database solutions for various use cases including reporting, product analytics, marketing optimization and financial reporting. Team serves as the foundation for decision-making at DoorDash. About the Role DoorDash is looking for a Software Engineer, Data to be a technical powerhouse to help us scale our data reliability, data infrastructure, automation and tools to meet growing business needs. You're excited about this opportunity because you will… Own critical data systems that support multiple products/teams Develop, implement and enforce best practices for data infrastructure and automation Design, develop and implement large scale, high volume, high performance data models and pipelines for Data Lake and Data Warehouse Improve the reliability and scalability of our Ingestion, data processing, ETLs, Reporting tools and data ecosystem services Manage a portfolio of data products that deliver high-quality, trustworthy data Help onboard and support other engineers as they join the team We're excited about you because… 3+ years of professional experience 3+ years experience working in data platform and data engineering or a similar role Proficiency in programming languages such as Python/Kotlin/Scala 3+ years of experience in ETL orchestration and workflow management tools like Airflow Expert in database fundamentals, SQL, data reliability practices and distributed computing 3+ years of experience with the Distributed data/similar ecosystem (Spark, Presto) and streaming technologies such as Kafka/Flink/Spark Streaming Excellent communication skills and experience working with technical and non-technical teams and knowledge of reporting tools Comfortable working in fast paced environment, self starter and self organizing Ability to think strategically, analyze and interpret market and consumer information You must be located near one of our engineering hubs indicated above Notice to Applicants for Jobs Located in NYC or Remote Jobs Associated With Office in NYC Only We use Covey as part of our hiring and/or promotional process for jobs in NYC and certain features may qualify it as an AEDT in NYC. As part of the hiring and/or promotion process, we provide Covey with job requirements and candidate submitted applications. We began using Covey Scout for Inbound from August 21, 2023, through December 21, 2023, and resumed using Covey Scout for Inbound again on June 29, 2024. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: Covey About DoorDash At DoorDash, our mission to empower local economies shapes how our team members move quickly, learn, and reiterate in order to make impactful decisions that display empathy for our range of usersβ€”from Dashers to merchant partners to consumers. We are a technology and logistics company that started with door-to-door delivery, and we are looking for team members who can help us go from a company that is known for delivering food to a company that people turn to for any and all goods. DoorDash is growing rapidly and changing constantly, which gives our team members the opportunity to share their unique perspectives, solve new challenges, and own their careers. We're committed to supporting employees' happiness, healthiness, and overall well-being by providing comprehensive benefits and perks. 
Our Commitment to Diversity and Inclusion We're committed to growing and empowering a more inclusive community within our company, industry, and cities. That's why we hire and cultivate diverse teams of people from all backgrounds, experiences, and perspectives. We believe that true innovation happens when everyone has room at the table and the tools, resources, and opportunity to excel. If you need any accommodations, please inform your recruiting contact upon initial connection. We use Covey as part of our hiring and/or promotional process for jobs in certain locations. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: https://getcovey.com/nyc-local-law-144 To request a reasonable accommodation under applicable law or alternate selection process, please inform your recruiting contact upon initial connection. Show more Show less

Posted 1 week ago

Apply

8.0 - 13.0 years

8 - 13 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Foundit logo

8+ years of experience in data engineering or a related field. Strong expertise in Snowflake including schema design, performance tuning, and security. Proficiency in Python for data manipulation and automation. Solid understanding of data modeling concepts (star/snowflake schema, normalization, etc.). Experience with DBT for data transformation and documentation. Hands-on experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, Prefect). Strong SQL skills and experience with large-scale data sets. Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies