
2785 Airflow Jobs - Page 50

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Designation: Solution Architect
Office Location: Gurgaon
Position Description: As a Technical Lead, you will be responsible for leading the development and delivery of the platforms. This includes overseeing the entire product lifecycle from solution design through execution and launch, building the right team, and collaborating closely with business and product teams.
Primary Responsibilities: Design end-to-end solutions that meet business requirements and align with the enterprise architecture. Define the architecture blueprint, including integration, data flow, application, and infrastructure components. Evaluate and select appropriate technology stacks, tools, and frameworks. Ensure proposed solutions are scalable, maintainable, and secure. Collaborate with business and technical stakeholders to gather requirements and clarify objectives. Act as a bridge between business problems and technology solutions. Guide development teams during the execution phase to ensure solutions are implemented according to design. Identify and mitigate architectural risks and issues. Ensure compliance with architecture principles, standards, policies, and best practices. Document architectures, designs, and implementation decisions clearly and thoroughly. Identify opportunities for innovation and efficiency within existing and upcoming solutions. Conduct regular performance and code reviews, and provide feedback to development team members to support their professional development. Lead proof-of-concept initiatives to evaluate new technologies.
Functional Responsibilities: Facilitate daily stand-up meetings, sprint planning, sprint reviews, and retrospectives. Work closely with the product owner to prioritize the product backlog and ensure that user stories are well defined and ready for development. Identify and address issues or conflicts that may impact project delivery or team morale. Experience with Agile project management tools such as Jira and Trello.
Required Skills: Bachelor's degree in Computer Science, Engineering, or a related field. 7+ years of experience in software engineering, with at least 3 years in a solution architecture or technical leadership role. Proficiency with the AWS or GCP cloud platform. Strong implementation knowledge of the JS tech stack (NodeJS, ReactJS). Experience with database engines (MySQL and PostgreSQL), with proven knowledge of database migrations and high-throughput, low-latency use cases. Experience with key-value stores like Redis, MongoDB, and similar. Preferred knowledge of distributed technologies (Kafka, Spark, Trino, or similar) with proven experience in event-driven data pipelines. Proven experience setting up big data pipelines to handle high-volume transactions and transformations. Experience with BI tools (Looker, Power BI, Metabase, or similar). Experience with data warehouses like BigQuery, Redshift, or similar. Familiarity with CI/CD pipelines, containerization (Docker/Kubernetes), and IaC (Terraform/CloudFormation).
Good to Have: Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, TOGAF, etc. Experience setting up analytical pipelines using BI tools (Looker, Power BI, Metabase, or similar) and low-level Python tools like Pandas, NumPy, and PyArrow. Experience with data transformation tools like DBT, SQLMesh, or similar. Experience with data orchestration tools like Apache Airflow, Kestra, or similar.
Work Environment Details:
About Affle: Affle is a global technology company with a proprietary consumer intelligence platform that delivers consumer engagement, acquisitions, and transactions through relevant mobile advertising. The platform aims to enhance returns on marketing investment through contextual mobile ads and also by reducing digital ad fraud. While Affle's Consumer platform is used by online and offline companies for measurable mobile advertising, its Enterprise platform helps offline companies go online through platform-based app development, enablement of O2O commerce, and its customer data platform. Affle India successfully completed its IPO in India on 08 Aug 2019 and now trades on the stock exchanges (BSE: 542752 & NSE: AFFLE). Affle Holdings is the Singapore-based promoter for Affle India, and its investors include Microsoft and Bennett Coleman & Company (BCCL), among others. For more details: www.affle.com
About the BU: Ultra - Access deals, coupons, and walled-garden-based user acquisition on a single platform to offer bottom-funnel optimization across multiple inventory sources. For more details, please visit: https://www.ultraplatform.io/

Posted 2 weeks ago

Apply

3.0 years

4 - 9 Lacs

Bengaluru

On-site

Level Up Your Career with Zynga! At Zynga, we bring people together through the power of play. As a global leader in interactive entertainment and a proud label of Take-Two Interactive, our games have been downloaded over 6 billion times—connecting players in 175+ countries through fun, strategy, and a little friendly competition. From thrilling casino spins to epic strategy battles, mind-bending puzzles, and social word challenges, our diverse game portfolio has something for everyone. Fan-favorites and latest hits include FarmVille™, Words With Friends™, Zynga Poker™, Game of Thrones Slots Casino™, Wizard of Oz Slots™, Hit it Rich! Slots™, Wonka Slots™, Top Eleven™, Toon Blast™, Empires & Puzzles™, Merge Dragons!™, CSR Racing™, Harry Potter: Puzzles & Spells™, Match Factory™, and Color Block Jam™—plus many more! Founded in 2007 and headquartered in California, our teams span North America, Europe, and Asia, working together to craft unforgettable gaming experiences. Whether you're spinning, strategizing, matching, or competing, Zynga is where fun meets innovation—and where you can take your career to the next level. Join us and be part of the play!
We are seeking experienced and passionate engineers to join our collaborative and innovative team. Zynga’s mission is to “Connect the World through Games” by building a truly social experience that makes the world a better place. The ideal candidate needs to have a strong focus on building high-quality, maintainable software that has global impact. The Analytics Engineering team is responsible for all things data at Zynga. We own the full game and player data pipeline - from ingestion to storage to driving insights and analytics. As a Data Engineer, you will be responsible for the software design and development of quality services and products to support the Analytics needs of our games. In this role, you will be part of our Analytics Engineering group, focusing on advanced technology developments for building scalable data infrastructure and end-to-end services which can be leveraged by the various games. We are a 120+ person organization serving 1,500 others across 13 global locations.
Your responsibilities will include: Build and operate a multi-PB-scale data platform. Design, code, and develop new features, fix bugs, and deliver enhancements to systems and data pipelines (ETLs) while adhering to the SLA. Identify anomalies and inconsistencies in data sets and algorithms, flag them to the relevant team, and/or fix the bugs in the data workflows where applicable. Follow best engineering methodologies to ensure performance, reliability, scalability, and measurability. Collaborate effectively with teammates, contributing to an innovative environment of technical excellence.
You will be a perfect fit if you have: Bachelor’s degree in Computer Science or a related technical discipline (or equivalent). 3+ years of strong data engineering design/development experience in building large-scale, distributed data platforms/products. Advanced coding expertise in SQL and a Python/JVM-based language. Exposure to heterogeneous data storage systems like relational, NoSQL, in-memory, etc. Knowledge of data modeling, lineage, data access, and its governance. Proficiency in AWS services like Redshift, Kinesis, Lambda, RDS, EKS/ECS, etc. Exposure to open source software, frameworks, and broader powerful technologies (Airflow, Kafka, DataHub, etc). Demonstrated ability to deliver work on time with attention to quality.
Excellent written and spoken communication skills and the ability to work optimally in a geographically distributed team environment. We encourage you to apply even if you don’t meet every single requirement. Your unique perspective and experience could be exactly what we’re looking for. We are proud to be an equal opportunity employer, which means we are committed to creating and celebrating diverse thoughts, cultures, and backgrounds throughout our organization. Employment with us is based on substantive ability, objective qualifications, and work ethic – not an individual’s race, creed, color, religion, sex or gender, gender identity or expression, sexual orientation, national origin or ancestry, alienage or citizenship status, physical or mental disability, pregnancy, age, genetic information, veteran status, marital status, status as a victim of domestic violence or sex offenses, reproductive health decision, or any other characteristics protected by applicable law. As an equal opportunity employer, we are committed to providing the necessary support and accommodation to qualified individuals with disabilities, health conditions, or impairments (subject to any local qualifying requirements) to ensure their full participation in the job application or interview process. Please contact us at accommodationrequest@zynga.com to request any accommodations or for support related to your application for an open position. Please be aware that Zynga does not conduct job interviews or make job offers over third-party messaging apps such as Telegram, WhatsApp, or others. Zynga also does not engage in any financial exchanges during the recruitment or onboarding process, and will never ask a candidate for their personal or financial information over an app or other unofficial chat channel. Any attempt to do so may be the result of a scam or phishing attack, and you should not engage. Zynga’s in-house recruitment team will only contact individuals through their official Company email addresses (i.e., via a zynga.com, naturalmotion.com, smallgiantgames.com, themavens.com, gram.gs email domain).

Posted 2 weeks ago

Apply

1.0 years

2 - 7 Lacs

Bengaluru

On-site

Job Description We are currently looking to hire a highly motivated Data Scientist who has the hunger to solve our complex technical and business challenges. If you want to be part of our journey and make an impact, apply now!
YOUR ROLE AT SIXT You will build and maintain robust ETL pipelines for collecting and processing data related to pricing, competitors, and ancillary products. You will perform deep exploratory data analysis to uncover trends and insights. You will generate clean, aggregated datasets to support reporting and dashboards. You will collaborate with cross-functional teams to define data requirements and deliver actionable insights. You will apply basic statistical models to forecast or explain pricing and customer behaviour. You will create clear, concise visualizations to communicate findings to stakeholders.
YOUR SKILLS MATTER B.Tech/B.E./Master's degree in Computer Science or a similar discipline. You have 1-3 years of relevant experience in data engineering or data science. Programming: Proficiency in Python and Pandas for data manipulation and analysis. ETL Development: Experience designing and implementing ETL pipelines, including data cleaning, aggregation, and transformation. Workflow Orchestration: Hands-on with Airflow for scheduling and monitoring ETL jobs. Cloud & Serverless Computing: Exposure to AWS services such as Batch, Fargate, and Lambda for scalable data processing. Containerization: Familiarity with Docker for building and deploying reproducible environments. EDA & Visualization: Strong exploratory data analysis skills and the ability to communicate insights using data visualization libraries (e.g., Matplotlib, Seaborn, Plotly). Basic Predictive Modelling: Understanding of foundational machine learning techniques for inference and reporting. Good communication skills.
WHAT WE OFFER Cutting-Edge Tech: You will be part of a dynamic, tech-driven environment where innovation meets impact! We offer exciting challenges, cutting-edge technologies, and the opportunity to work with brilliant minds. Competitive Compensation: A market-leading salary with performance-based rewards. Comprehensive Benefits: Health insurance, wellness programs, and generous leave policies. Flexibility & Work-Life Balance: Our culture fosters continuous learning, collaboration, and flexibility, ensuring you grow while making a real difference. Hybrid work policies.
Additional Information About the department: Engineers take note: cutting-edge technology is waiting for you! We don't buy, we primarily do it all ourselves: all core systems, whether in the area of car sharing, car rental, ride hailing and much more, are developed and operated by SIXT itself. Our technical scope ranges from cloud and on-site operations through agile software development. We rely on state-of-the-art frameworks and architectures and strive for a long-term technical approach. Exciting? Then apply now!
About us: We are a leading global mobility service provider with sales of €3.07 billion and around 9,000 employees worldwide. Our mobility platform ONE combines our products SIXT rent (car rental), SIXT share (car sharing), SIXT ride (cab, driver and chauffeur services), SIXT+ (car subscription) and gives our customers access to our fleet of 222,000 vehicles, the services of 1,500 cooperation partners and around 1.5 million drivers worldwide. Together with our franchise partners, we are present in more than 110 countries at 2,098 rental stations. At SIXT, a first-class customer experience and outstanding customer service are our top priorities.
We focus on true entrepreneurship and long-term stability and align our corporate strategy with foresight. Want to take off with us and revolutionize the world of mobility? Apply now!
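
The SIXT role above centers on building ETL pipelines in Python/Pandas and scheduling them with Airflow. As a rough, hedged illustration only (the file path, dataset, and column names below are hypothetical and not taken from the posting), a cleaning-and-aggregation step of such a pipeline might look like this:

```python
import pandas as pd

def transform_pricing_data(raw_csv_path: str) -> pd.DataFrame:
    """Clean raw competitor pricing data and aggregate it per day and segment.

    The path and column names are illustrative placeholders.
    """
    df = pd.read_csv(raw_csv_path, parse_dates=["observed_at"])

    # Basic cleaning: drop rows without a price, normalize the currency column
    df = df.dropna(subset=["price"])
    df["currency"] = df["currency"].str.upper()

    # Aggregate to a daily, per-segment view suitable for reporting dashboards
    daily = (
        df.assign(observed_date=df["observed_at"].dt.date)
          .groupby(["observed_date", "vehicle_segment"], as_index=False)
          .agg(avg_price=("price", "mean"), observations=("price", "size"))
    )
    return daily

if __name__ == "__main__":
    print(transform_pricing_data("competitor_prices.csv").head())
```

In practice, a function like this would typically be one task in an Airflow DAG that schedules the extract, transform, and load steps daily.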

Posted 2 weeks ago

Apply

7.0 - 8.0 years

3 - 9 Lacs

Panvel

On-site

Further your career at Ball, a world leader in manufacturing sustainable aluminium packaging. Achieve extraordinary things when you join our team, and make a difference in your professional development, the community, and around the globe!
Job Summary The responsibility of the shift electrical engineer is to ensure the smooth operation of process equipment and ancillaries in the plant during his shift. The shift electrical engineer is the link between the Engineering function and his respective shift operating team. He will drive all engineering team KPIs in his shift and ensure optimum uptime by actively attending to breakdowns. He will prepare a report / action plan for avoiding such failures in the future. He will also ensure the utilities and process equipment are maintained in his shift through proper daily and periodic maintenance, both planned and predictive in nature. He will give inputs to the engineering team and spare parts stores for planning of maintenance and spares based on machine conditions.
Key Responsibilities Carry out scheduled and unscheduled maintenance tasks, visual maintenance, repairs, preventive maintenance, etc. as directed by the senior or any other official, including but not limited to those from the Production Dept. Follow the preventive maintenance schedule. Record all readings correctly and accurately. Ensure effective utilization of tools and tackles and prevent misuse of the same. Always use safety equipment and tools whenever and wherever required. Attend breakdowns independently, carry out modification work, and do scheduled and unscheduled preventive maintenance to support the production process. Troubleshoot electrical faults / problems and take corrective actions in process equipment and ancillaries in his shift. Achieve targets for electrical downtime and breakdowns. Modify electrical / PLC programs after approval from the immediate supervisor. Can read machine wiring diagrams and ladder diagrams; can do troubleshooting and fault finding in his shift. Hands-on knowledge of Allen Bradley ControlLogix PLCs, PanelViews, servo drives, and Allen Bradley drives with Ethernet communication; knowledge of DeviceNet; able to troubleshoot these devices and programs. Modify / create AutoCAD drawings. Follow and maintain the organization's SOPs and safe working practices. Comply with any legal, environmental, safety, and BRC requirements and/or checks. Actively participate in the company's Continuous Improvement initiatives, e.g. Lean Manufacturing, Six Sigma, SMED, 5S, ISO, etc. Conduct lean manufacturing activities and participate in audits as per company requirements. Meet the KPIs of the various BPEMEA strategy Balanced Scorecard and always look for improvement. Help to maximize production and minimize spoilage during the manufacturing process by participating in all shift meetings and shift activities, and take necessary actions on time to eliminate losses. Maintain housekeeping to the standard defined in the Housekeeping and Hygiene procedure. Work on gas-fired equipment (boilers, ovens) and carry out safety checks, burner service, and airflow balancing. Inspect machine safeties and process stops on a periodic basis, and carry out the set-up checks that may follow maintenance work. Ensure that all supplying equipment, electrical installations, etc. are checked and corrected to company standards periodically and that records are maintained. Complete jobs within the stipulated time as directed by the senior and report back. Shall be responsible for saving maintenance cost.
Guide the engineering team and spare parts stores manager on spares requirements, etc. in time to avoid breakdowns and minimize machine downtime. Have effective communication with seniors and fellow workmen, reporting defects, manufacturing difficulties, ideas for improvement, etc. Attend to any type of electrical job to support production activities. Support other departments as may be directed by the senior / HOD.
Education Diploma/Degree in Electrical Engineering or Industrial Electronics.
Previous Experience 7 to 8 years' experience in a manufacturing background, with advanced Allen Bradley PLC and drive programming in a high-speed continuous manufacturing industry, in the maintenance department. Good experience as a troubleshooter in electrical / electronics. Successful track record in both electrical and mechanical maintenance. Hands-on experience is a must; must be able to work independently in shifts with minimal supervision. Hands-on experience in troubleshooting machine problems with the help of PLC programming. Hands-on experience in troubleshooting machine problems with the help of electrical drawings. Good experience with all common electrical equipment, e.g. motors, sensors, timers, PID controllers, thermocouples, servomotors, potentiometers. He should be able to read and draw electrical drawings and PLC ladder diagrams. Good knowledge of PCC, MCC, and PDB; hands-on experience with transformers (HT and LT side), various circuit breakers, and protection relays.
Skills & Competencies required Computer literacy and the ability to generate reports. Ability to read, understand, and develop engineering drawings. Electrical, PLC (RSLogix 500 & 5000), PV1000/1500, and VFD Allen Bradley knowledge is a must. Advanced level knowledge of programming. Troubleshooting of electrical faults independently, with and without electrical drawings. He should be able to distinguish between electrical and mechanical problems. He should also be able to make changes in the equipment as per operational requirements. Knowledge of electrical power-saving calculations and concepts; should have worked on energy conservation projects. Hands-on experience developing electrical panels and electrical circuits will be an added advantage. Knowledge / experience of ovens, gas burners, and vaporizers will be an added advantage.
Ball Corporation is proud to be an Equal Opportunity Employer. We actively encourage applications from everybody. All qualified job applicants will receive consideration without regard to race, color, religion, creed, national origin, aboriginality, genetic information, ancestry, marital status, sex, sexual orientation, gender identity or expression, physical or mental disability, pregnancy, veteran status, age, political affiliation or any other non-merit characteristic. When you join Ball you belong to a team of over 16,000 members worldwide. Our products range from infinitely recyclable aluminium cans and cups to aerosol bottles that enable our customers to contribute to a better world. Each of us has a deep commitment to diversity and inclusion which is the foundation of our culture of belonging. Everyone at Ball is making a difference by doing what we love. Because what we create may change, but what we will always make is a difference. Please note the advertised job title might vary from the job title on the contract due to local job title structure and global HR systems. No agencies please. Job Grade: Global Grade 8

Posted 2 weeks ago

Apply

0 years

4 - 7 Lacs

Noida

On-site

JOB DESCRIPTION
About Times Internet At Times Internet, we create premium digital products that simplify and enhance the lives of millions. As India’s largest digital products company, we have a significant presence across a wide range of categories, including News, Sports, Fintech, and Enterprise solutions. Our portfolio features market-leading and iconic brands such as TOI, ET, NBT, Cricbuzz, Times Prime, Times Card, Indiatimes, Whatshot, Abound, Willow TV, Techgig and Times Mobile among many more. Each of these products is crafted to enrich your experiences and bring you closer to your interests and aspirations. As an equal opportunity employer, Times Internet strongly promotes inclusivity and diversity. We are proud to have achieved overall gender pay parity in 2018, verified by an independent audit conducted by Aon Hewitt. We are driven by the excitement of new possibilities and are committed to bringing innovative products, ideas, and technologies to help people make the most of every day. Join us and take us to the next level!
About the Business Unit (Colombia) Colombia is a contextual AI-driven AdTech platform that revolutionizes digital advertising with cutting-edge technology. By utilizing advanced machine learning and AI, Colombia delivers highly targeted and personalized ad experiences, ensuring optimal engagement and performance. Our platform seamlessly integrates with publishers and advertisers, offering robust solutions for display, video, and native advertising. With over 650 million unique users monthly, Colombia leverages data-driven insights to provide unmatched reach and engagement. As part of India’s largest premium content network, we ensure high-quality user interaction and engagement. With over 15 billion recommendations per month and 650 Mn+ unique users, we guarantee massive audience reach. Utilizing AI & ML models for lookalike audience creation, bid prediction, and contextual targeting, Colombia enhances campaign effectiveness and user engagement. Our AI-powered user segmentation and article annotation streamline processes, delivering more accurate and relevant user experiences. Experience the future of advertising with Colombia.
About the Role: We are looking for a skilled AdTech QA Specialist to support our operations across ad platforms, reporting pipelines, and performance dashboards. This is a versatile technical role involving API integrations, crawler development, and automation of reporting flows across platforms like Google Ad Manager (GAM), Google Ads, Google Analytics, and custom data sources. The ideal candidate is someone who can think like a developer and act like a problem-solver, comfortable writing Python scripts that scale, automate, and simplify recurring tasks across our ad operations and analytics processes.
Key Responsibilities Develop Python scripts and tools for automating reports, monitoring KPIs, and fetching campaign data from various platforms. Write and maintain web crawlers to fetch information from web pages and ad delivery endpoints. Create data pipelines that process, store, and sync data with Google Sheets, BigQuery, dashboards, and internal tools. Automate performance dashboards, blending data from multiple sources using APIs or Apps Script. Debug and resolve API errors or data discrepancies in reports. Implement scheduling and error-handling mechanisms for daily/weekly tasks using tools like cron, Airflow, or custom schedulers.
Work closely with internal stakeholders to understand campaign KPIs and deliver automated insights.
Skills & Requirements 1+ years of strong programming skills in Python (mandatory). Proficiency in API integrations and handling REST, SOAP, or GraphQL APIs. Hands-on experience with data manipulation libraries and Google APIs (Sheets, Drive, Gmail). Familiarity with web scraping tools. Ability to write modular, reusable, and well-documented scripts. Strong troubleshooting skills and attention to detail.
Bonus Skills Experience with Google Apps Script for spreadsheet and email automation. Exposure to SQL, BigQuery, or other cloud databases. Knowledge of cron jobs, Linux-based scripting, or CI/CD pipelines. Familiarity with ad tech KPIs: CPM, CTR, viewability, IVT, pacing, etc.
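
The responsibilities above mention scheduling daily/weekly reporting tasks with retries and error handling via tools like Airflow. A minimal sketch of what that could look like follows; it is illustrative only, and the DAG id, schedule, and the report-pulling function are hypothetical placeholders rather than anything from the posting:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def pull_campaign_report(**context):
    # Placeholder for an API call to an ad platform (e.g., GAM or Google Ads).
    # A real task would authenticate, page through results, and push rows
    # to Sheets or BigQuery for the downstream dashboards.
    print("Fetching campaign data for", context["ds"])

default_args = {
    "owner": "adops",
    "retries": 3,                           # simple error handling: retry transient API failures
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="daily_campaign_report",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",             # run once per day; cron strings also work here
    catchup=False,
    default_args=default_args,
) as dag:
    PythonOperator(task_id="pull_campaign_report", python_callable=pull_campaign_report)
```

The retry settings stand in for the "error-handling mechanisms" the role describes; alerting callbacks or SLA checks could be layered on in the same DAG.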

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Looking for Senior Data Engineers / Data Architects with 7+ years of experience. Location: Chennai/Hyderabad. Notice Period: Immediate to 30 days (ONLY). Mandatory key skills: AWS, Databricks, Python, PySpark, SQL.
1. Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats.
2. Data Integration and Transformation: Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
3. Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations.
4. Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.
5. Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
6. Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
7. Migration and Upgrades: Lead migrations from legacy data systems to modern cloud-based platforms, ensuring smooth transitions and enhanced scalability.
8. Cost Optimization: Implement strategies for reducing cloud infrastructure costs, such as optimizing resource usage, setting up lifecycle policies, and automating cost alerts.
9. Data Security and Compliance: Ensure secure access to data by implementing IAM roles and policies, adhering to data security best practices, and enforcing compliance with organizational standards.
10. Collaboration and Support: Work closely with data scientists, analysts, and business teams to understand data requirements and provide support for data-related tasks.
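
Points 1 and 5 above describe ingesting and transforming raw data while enforcing quality checks. A small PySpark sketch of that pattern is shown below; the S3 paths, column names, and quality rule are assumptions made purely for illustration, not details from the listing:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest raw JSON landed in object storage (path and schema are illustrative)
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Simple data quality gate: fail fast if a key field is missing
null_ids = raw.filter(F.col("order_id").isNull()).count()
if null_ids > 0:
    raise ValueError(f"{null_ids} rows are missing order_id; aborting load")

# Transform: normalize timestamps and keep only the columns downstream needs
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .select("order_id", "customer_id", "order_ts", "amount")
)

# Write a columnar, date-partitioned output so queries scan less data
orders.withColumn("order_date", F.to_date("order_ts")) \
      .write.mode("overwrite").partitionBy("order_date") \
      .parquet("s3://example-bucket/curated/orders/")
```

A job like this would typically be scheduled by Airflow or a Databricks workflow, matching point 4 in the list.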

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Only Immediate Joiners. Location - Noida / Hyderabad.
Data Architect – Telecom Domain
About the Role: We are seeking an experienced Telecom Data Architect to join our team. In this role, you will be responsible for designing comprehensive data architecture and technical solutions specifically for telecommunications industry challenges. You will work closely with customers and technology partners to deliver data solutions that address complex telecommunications business requirements including customer experience management, network optimization, revenue assurance, and digital transformation initiatives.
Responsibilities: Design and articulate enterprise-scale telecom data architectures. Develop comprehensive data models aligned with telecommunications domains such as Customer, Product, Service, Resource, and Partner management. Create data architectures that support telecom-specific use cases including customer journey analytics, network performance optimization, fraud detection, and revenue assurance. Design solutions leveraging Microsoft Azure and Databricks for telecom data processing and analytics. Conduct technical discovery sessions with telecom clients to understand their OSS/BSS architecture, network analytics needs, customer experience requirements, and digital transformation objectives. Design and deliver proof of concepts (POCs) and technical demonstrations showcasing modern data platforms solving real-world telecommunications challenges. Create comprehensive architectural diagrams and implementation roadmaps for telecom data ecosystems spanning cloud, on-premises, and hybrid environments. Evaluate and recommend appropriate big data technologies, cloud platforms, and processing frameworks based on telecom-specific requirements and regulatory compliance needs. Design data governance frameworks compliant with telecom industry standards and regulatory requirements (GDPR, data localization, etc.).
Stay current with the latest advancements in data technologies including cloud services, data processing frameworks, and AI/ML capabilities. Contribute to the development of best practices, reference architectures, and reusable solution components for accelerating proposal development.
Qualifications: Bachelor's or Master's degree in Computer Science, Telecommunications Engineering, Data Science, or a related technical field. 10+ years of experience in data architecture, data engineering, or solution architecture roles, with at least 5 years in the telecommunications industry. Demonstrated ability to estimate project efforts, resource requirements, and implementation timelines for complex telecom data initiatives. Strong understanding of telecom OSS/BSS systems, network management, customer experience management, and revenue management domains. Hands-on experience with data platforms including Databricks and Microsoft Azure in telecommunications contexts. Experience with modern data processing frameworks such as Apache Kafka, Spark, and Airflow for real-time telecom data streaming. Proficiency in the Azure cloud platform and its data services, with an understanding of telecom-specific deployment requirements. Knowledge of system monitoring and observability tools for telecommunications data infrastructure. Experience implementing automated testing frameworks for telecom data platforms and pipelines. Familiarity with telecom data integration patterns, ETL/ELT processes, and data governance practices specific to telecommunications. Experience designing and implementing data lakes, data warehouses, and machine learning pipelines for telecom use cases. Proficiency in programming languages commonly used in data processing (Python, Scala, SQL) with telecom domain applications. Understanding of telecommunications regulatory requirements and data privacy compliance (GDPR, local data protection laws). Excellent communication and presentation skills with the ability to explain complex technical concepts to telecom stakeholders. Strong problem-solving skills and the ability to think creatively to address telecommunications industry challenges. Relevant data platform certifications such as Databricks or Azure Data Engineer are a plus. Willingness to travel as required.

Posted 2 weeks ago

Apply

6.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Summary: We are seeking a skilled Big Data Tester & Developer to design, develop, and validate data pipelines and applications on large-scale data platforms. You will work on data ingestion, transformation, and testing workflows using tools from the Hadoop ecosystem and modern data engineering stacks. Experience - 6-12 years.
Key Responsibilities: Develop and test Big Data pipelines using Spark, Hive, Hadoop, and Kafka. Write and optimize PySpark/Scala code for data processing. Design test cases for data validation, quality, and integrity. Automate testing using Python/Java and tools like Apache NiFi, Airflow, or DBT. Collaborate with data engineers, analysts, and QA teams.
Key Skills: Strong hands-on experience with Big Data tools: Spark, Hive, HDFS, Kafka. Proficient in PySpark, Scala, or Java. Experience in data testing, ETL validation, and data quality checks. Familiarity with SQL, NoSQL, and data lakes. Knowledge of CI/CD, Git, and automation frameworks.
We are also looking for a skilled PostgreSQL Developer/DBA to design, implement, optimize, and maintain our PostgreSQL database systems. You will work closely with developers and data teams to ensure high performance, scalability, and data integrity. Experience - 6 to 12 years.
Key Responsibilities: Develop complex SQL queries, stored procedures, and functions. Optimize query performance and database indexing. Manage backups, replication, and security. Monitor and tune database performance. Support schema design and data migrations.
Key Skills: Strong hands-on experience with PostgreSQL. Proficient in SQL and PL/pgSQL scripting. Experience in performance tuning, query optimization, and indexing. Familiarity with logical replication, partitioning, and extensions. Exposure to tools like pgAdmin, psql, or PgBouncer.
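
The PostgreSQL role above calls out query optimization and indexing. A hedged sketch of that workflow from Python is shown below; the connection parameters, table, and column names are made-up placeholders used only to illustrate the EXPLAIN-then-index loop:

```python
import psycopg2

# Connection parameters are placeholders for a real environment.
conn = psycopg2.connect(dbname="appdb", user="app", password="secret", host="localhost")
conn.autocommit = True

with conn.cursor() as cur:
    # Inspect the current plan for a slow lookup before changing anything
    cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,))
    for (line,) in cur.fetchall():
        print(line)

    # Add an index so the lookup no longer needs a sequential scan
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_orders_customer_id ON orders (customer_id)"
    )

    # Re-check the plan to confirm an index scan is now used
    cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,))
    for (line,) in cur.fetchall():
        print(line)

conn.close()
```

The same before/after EXPLAIN comparison is what a reviewer would expect to see attached to an indexing change.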

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Data Engineer. Location: Chennai. Experience Level: 3-6 Years. Employment Type: Full-time.
About Us: SuperOps is a SaaS start-up empowering IT service providers and IT teams around the world with technology that is cutting-edge, future-ready, and powered by AI. We are backed by marquee investors like Addition, March Capital, Matrix Partners India, Elevation Capital, and Tanglin Venture Partners. Founded by Arvind Parthiban, a serial entrepreneur, and Jayakumar Karumbasalam, a veteran in the IT space, SuperOps is built on the back of a team of engineers, product architects, designers, and AI experts who want to reshape the world of IT. We have taken on a market that is plagued by legacy solutions and subpar experiences. The potential to do something great is immense. So if you love to grow, want to be part of a kickass team that inspires you to do more, and want to make an everlasting mark in the world of IT, SuperOps is the place to be. We also believe that the journey is as important as the destination. We want to build the best products out there and have fun while doing so. So come, and be part of our A-star team of superheroes.
Role Summary: We are seeking a skilled and motivated Data Engineer to join our growing team. In this role, you will be instrumental in designing, building, and maintaining our data infrastructure, ensuring that reliable and timely data is available for analysis across the organization. You will work closely with various teams to integrate data from diverse sources and transform it into actionable insights that drive our business forward.
Key Responsibilities: Design, develop, and maintain scalable and robust data pipelines to ingest data from various sources, including CRM systems (e.g., Salesforce), billing platforms, product analytics tools (e.g., Mixpanel, Amplitude), and marketing platforms (e.g., Google Ads, HubSpot). Build, manage, and optimize our data warehouse to serve as the central repository for all business-critical data. Implement and manage efficient data synchronization processes between source systems and the data warehouse. Oversee the storage and management of raw data, ensuring data integrity and accessibility. Develop and maintain data transformation pipelines (ETL/ELT) to process raw data into clean, structured formats suitable for analytics, reporting, and dashboarding. Ensure seamless synchronization and consistency between raw and processed data layers. Collaborate with data analysts, product managers, and other stakeholders to understand data needs and deliver appropriate data solutions. Monitor data pipeline performance, troubleshoot issues, and implement improvements for efficiency and reliability. Document data processes, architectures, and definitions.
Qualifications: Proven experience (5 to 8 years) as a Data Engineer. Strong experience in designing, building, and maintaining data pipelines and ETL/ELT processes. Proficiency with data warehousing concepts and technologies (e.g., BigQuery, Redshift, Snowflake, Databricks). Experience integrating data from various APIs and databases (SQL, NoSQL). Solid understanding of data modeling principles. Proficiency in programming languages commonly used in data engineering (e.g., Python, SQL).
Experience with workflow orchestration tools (e.g., Airflow, Prefect, Dagster). Familiarity with cloud platforms (e.g., AWS, GCP, Azure). Excellent problem-solving and analytical skills. Strong communication and collaboration abilities.
Bonus Points: Experience working in a SaaS company. Understanding of key SaaS business metrics (e.g., MRR, ARR, Churn, LTV, CAC). Experience with data visualization tools (e.g., Tableau, Looker, Power BI). Familiarity with containerization technologies (e.g., Docker, Kubernetes).
Why Join Us? Impact: You'll work on a product that is revolutionising IT service management for MSPs and IT teams worldwide. Growth: SuperOps is growing rapidly, and there are ample opportunities for career progression and leadership roles. Collaboration: Work with talented engineers, designers, and product managers in a supportive and innovative environment.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote


Job Summary Auriga is looking for a Data Engineer to design and maintain cloud-native data pipelines supporting real-time analytics and machine learning. You'll work with cross-functional teams to build scalable, secure data solutions using GCP (BigQuery, Looker), SQL, Python, and orchestration tools like Dagster and DBT. Mentoring junior engineers and ensuring data best practices will also be part of your role.
WHAT YOU'LL DO: Design, build, and maintain scalable data pipelines and architectures to support analytical and operational workloads. Develop and optimize ETL/ELT pipelines, ensuring efficient data extraction, transformation, and loading from various sources. Work closely with backend and platform engineers to integrate data pipelines into cloud-native applications. Manage and optimize cloud data warehouses, primarily BigQuery, ensuring performance, scalability, and cost efficiency. Implement data governance, security, and privacy best practices, ensuring compliance with company policies and regulations. Collaborate with analytics teams to define data models and enable self-service reporting and BI capabilities. Develop and maintain data documentation, including data dictionaries, lineage tracking, and metadata management. Monitor, troubleshoot, and optimize data pipelines, ensuring high availability and reliability. Stay up to date with emerging data engineering technologies and best practices, continuously improving our data infrastructure.
WHAT WE'RE LOOKING FOR: Strong proficiency in English (written and verbal communication) is required. Experience working with remote teams across North America and Latin America, ensuring smooth collaboration across time zones. 5+ years of experience in data engineering, with expertise in building scalable data pipelines and cloud-native data architectures. Strong proficiency in SQL for data modeling, transformation, and performance optimization. Experience with BI and data visualization tools (e.g., Looker, Tableau, or Google Data Studio). Expertise in Python for data processing, automation, and pipeline development. Experience with cloud data platforms, particularly Google Cloud Platform (GCP). Hands-on experience with Google BigQuery, Cloud Storage, and Pub/Sub. Strong knowledge of ETL/ELT frameworks such as DBT, Dataflow, or Apache Beam. Familiarity with workflow orchestration tools like Dagster, Apache Airflow, or Google Cloud Workflows. Understanding of data privacy, security, and compliance best practices. Strong problem-solving skills, with the ability to debug and optimize complex data workflows. Excellent communication and collaboration skills.
NICE TO HAVE: Experience with real-time data streaming solutions (e.g., Kafka, Pub/Sub, or Kinesis). Familiarity with machine learning workflows and MLOps best practices. Knowledge of Terraform for Infrastructure as Code (IaC) in data environments. Familiarity with data integrations involving Contentful, Algolia, Segment, and Talon.One.
About Company Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom and many more.
We are a group of people who just could not leave our college life behind, and the inception of Auriga was solely based on a desire to keep working together with friends and enjoy the extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in!

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.
Job Title: Data Scientist. Location: Bangalore. Reporting to: Manager - Analytics / Senior Manager - Analytics.
1. Purpose of the role Contribute to the Data Science efforts of AB InBev's global non-commercial analytics capability, Supply Analytics. The candidate will be required to contribute, and may also need to guide the DS team staffed on the area and assess the efforts required to scale and standardize the use of Data Science across multiple ABI markets.
2. KEY TASKS AND ACCOUNTABILITIES Understand the business problem and translate it into an analytical problem; participate in the solution design process. Manage the full AI/ML lifecycle, including data preprocessing, feature engineering, model training, validation, deployment, and monitoring. Develop reusable and modular Python code adhering to OOP (Object-Oriented Programming) principles. Design, develop, and deploy machine learning models into production environments on Azure. Collaborate with data scientists, software engineers, and other stakeholders to meet business needs. Communicate findings clearly to both technical and business stakeholders.
3. Qualifications, Experience, Skills Level of educational attainment required (1 or more of the following): B.Tech/BE/Master's in CS/IS/AI/ML. Previous work experience required: minimum 3 years of relevant experience.
Technical skills required - Must Have: Strong expertise in Python, including advanced knowledge of OOP concepts. Exposure to AI/ML methodologies, with previous hands-on experience in ML concepts like forecasting, clustering, regression, classification, and optimization using Python. Azure tech stack, Databricks, and MLflow on any cloud platform. Airflow for orchestrating and automating workflows. MLOps concepts and containerization tools like Docker. Experience with version control tools such as Git. Consistently display an intent for problem solving. Strong communication skills (vocal and written). Ability to effectively communicate and present information at various levels of an organization.
Good To Have: Preferred industry exposure in the Manufacturing domain. Product building experience.
Other Skills required: Passion for solving problems using data. Detail oriented, analytical, and inquisitive. Ability to learn on the go. Ability to work independently and with others.
We dream big to create a future with more cheers.
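
The accountabilities above emphasize reusable, modular Python code written in an OOP style around a train/validate ML lifecycle. A minimal sketch of that style is given below; the class name, features, and model choice are invented for illustration and are not part of the posting:

```python
from dataclasses import dataclass

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

@dataclass
class DemandForecaster:
    """Small, reusable wrapper around the train/evaluate steps of an ML lifecycle."""
    n_estimators: int = 200
    random_state: int = 7

    def __post_init__(self):
        self.model = RandomForestRegressor(
            n_estimators=self.n_estimators, random_state=self.random_state
        )

    def fit(self, features: pd.DataFrame, target: pd.Series) -> "DemandForecaster":
        self.model.fit(features, target)
        return self

    def evaluate(self, features: pd.DataFrame, target: pd.Series) -> float:
        return mean_absolute_error(target, self.model.predict(features))

if __name__ == "__main__":
    # Toy data standing in for engineered supply-chain features
    data = pd.DataFrame({"temperature": range(100), "promo": [i % 2 for i in range(100)]})
    target = data["temperature"] * 1.5 + data["promo"] * 10
    X_train, X_test, y_train, y_test = train_test_split(data, target, random_state=7)
    print("MAE:", DemandForecaster().fit(X_train, y_train).evaluate(X_test, y_test))
```

Keeping the fit/evaluate steps behind a small class like this is one way such code stays reusable when it is later deployed and orchestrated (e.g., via Airflow) as the role describes.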

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Responsibilities Create, implement, and operate the strategy for robust and scalable data pipelines for business intelligence and machine learning. Develop and maintain core data frameworks and key infrastructure. Create and support the ETL pipeline to get data flowing correctly from existing and new sources to our data warehouse. Design the data warehouse and data models for efficient and cost-effective reporting. Collaborate with data analysts, data scientists, and other data consumers within the business to manage the data warehouse table structure and optimize it for reporting. Constantly strive to improve the software development process and team productivity. Define and implement Data Governance processes related to data discovery, lineage, access control, and quality assurance. Perform code reviews and QA data imported by various processes.
Qualifications 3-5 years of experience, with at least 2+ years of experience in the data engineering and data infrastructure space on any of the big data technologies: Hive, Spark, PySpark (batch and streaming), Airflow, Redshift, and Delta Lake. Experience in product-based companies or startups. Strong understanding of data warehousing concepts and the data ecosystem. Strong design/architecture experience architecting, developing, and maintaining solutions in AWS. Experience building data pipelines and managing the pipelines after they're deployed. Experience building data pipelines from business applications using APIs. Previous experience with Databricks is a big plus. Understanding of DevOps would be preferable, though not a must. Working knowledge of BI tools like Metabase and Power BI is a plus. Experience architecting systems for data access is a major plus.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


JOB ID - [URGENTLY REQUIRED] Senior BigQuery Developer (Google Cloud Platform) - Job Code 20250601/HYD/LNKD/FIS-YAHYA
Job Title: Senior BigQuery Developer (Google Cloud Platform). Experience: 5-8 Yrs. Location: Hyderabad. Job Type: Full-Time.
About the Role: We are seeking a highly skilled and experienced Senior BigQuery Developer with deep expertise in Google Cloud Platform (GCP). The ideal candidate will be responsible for designing, developing, and maintaining robust, scalable data pipelines and advanced analytics solutions using BigQuery and other GCP-native services. You will work closely with data scientists, analysts, and business stakeholders to ensure efficient and secure access to enterprise data.
Key Responsibilities: Design, develop, and optimize BigQuery data warehouses and data marts to support analytical and business intelligence workloads. Implement data modeling and best practices for partitioning, clustering, and table design in BigQuery. Integrate BigQuery with tools such as Dataform, Airflow, Cloud Composer, or dbt for orchestration and version control. Ensure compliance with security, privacy, and governance policies related to cloud-based data solutions. Monitor and troubleshoot data pipelines and scheduled queries for accuracy and performance. Stay up to date with evolving BigQuery features and GCP best practices.
Benefits You Will Get: Competitive salary package. Medical insurance. Exposure to numerous domains and projects. A chance for professional training and certifications, all company-sponsored. Clear and defined career paths, with professional development and exposure assured in all the latest technologies.
Hiring/Selection Process: 1 HR interview followed by 1 technical interview.
About Company - FIS Clouds FIS Clouds (www.fisclouds.com) is an IT services company, fast emerging as a global leader in digital technology and transformation solutions for enterprises, ranging from monoliths to hybrid and cloud transformation. We operate on a global scale, with a diverse talent base, and have our global offices in India, the US, the UK, and Jakarta, Indonesia. We at FIS Clouds strongly believe in Agility, Speed, and Quality, which can be found in the DNA of our company. We apply constant innovation to solve customer challenges and increase business outcomes. FIS brings extensive experience in cloud technologies including Public Cloud, Private Cloud, Multi-Cloud, Hybrid Cloud, DevOps, and Java. Additionally, FIS has a lot of experience in Data Analytics and Cloud Automation. Note: The package is no bar for the desired candidate. Earn your desired package by proving yourself.
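
The responsibilities above highlight partitioning and clustering as BigQuery table-design best practices. As a rough, hedged sketch only (the project, dataset, table, and field names are hypothetical), creating such a table with the google-cloud-bigquery client could look like this:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses default GCP credentials

# Table, dataset, and field names below are illustrative placeholders.
table = bigquery.Table(
    "my-project.analytics.page_events",
    schema=[
        bigquery.SchemaField("event_date", "DATE"),
        bigquery.SchemaField("country", "STRING"),
        bigquery.SchemaField("campaign_id", "STRING"),
        bigquery.SchemaField("revenue", "NUMERIC"),
    ],
)

# Partition by date and cluster by the columns most queries filter on,
# so scans (and therefore cost) are limited to the relevant partitions and blocks.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_date"
)
table.clustering_fields = ["country", "campaign_id"]

client.create_table(table, exists_ok=True)
```

The same partition/cluster choices can equally be expressed in DDL managed by Dataform or dbt, which is the orchestration-and-version-control angle the listing mentions.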

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Tata Consultancy Services is hiring Python Full Stack Developers!
Role: Python Full Stack Developer
Desired Experience Range: 6-8 Years
Location of Requirement: Hyderabad
Desired Skills - Technical/Behavioral
Primary Skill - Frontend: 6+ years of overall experience with proficiency in React (2+ years), TypeScript (1+ year), and React hooks (1+ year). Experience with ESLint, CSS-in-JS styling (preferably Emotion), state management (preferably Redux), and JavaScript bundlers such as Webpack. Experience with integrating with RESTful APIs or other web services.
Backend: Expertise with Python (3+ years, preferably Python 3). Proficiency with a Python web framework (2+ years, preferably Flask and FastAPI). Experience with a Python linter (preferably flake8), graph databases (preferably Neo4j), a package manager (preferably pip), Elasticsearch, and Airflow. Experience with developing microservices, RESTful APIs, or other web services. Experience with database design and management, including NoSQL/RDBMS tradeoffs.
Interested and eligible candidates can apply.

Posted 2 weeks ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site


Job Summary: We are looking for a skilled and motivated Data Engineer to join our growing data team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support analytics, reporting, and machine learning initiatives. You will work closely with data analysts, data scientists, and software engineers to ensure reliable access to high-quality data across the organization.
Key Responsibilities: Design, develop, and maintain robust and scalable data pipelines and ETL/ELT processes. Build and optimize data architectures to support data warehousing, batch processing, and real-time data streaming. Collaborate with data scientists, analysts, and other engineers to deliver high-impact data solutions. Ensure data quality, consistency, and security across all systems. Manage and monitor data workflows to ensure high availability and performance. Develop tools and frameworks to automate data ingestion, transformation, and validation. Participate in data modeling and architecture discussions for both transactional and analytical systems. Maintain documentation of data flows, architecture, and related processes.
Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field. Strong programming skills in Python, Java, or Scala. Proficient in SQL and experience working with relational databases (e.g., PostgreSQL, MySQL). Experience with big data tools and frameworks (e.g., Hadoop, Spark, Kafka). Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and services like S3, Redshift, BigQuery, or Azure Data Lake. Hands-on experience with data pipeline orchestration tools (e.g., Airflow, Luigi). Experience with data warehousing and data modeling best practices.
Preferred Qualifications: Experience with CI/CD for data pipelines. Knowledge of containerization and orchestration tools like Docker and Kubernetes. Experience with real-time data processing technologies (e.g., Apache Flink, Kinesis). Familiarity with data governance and security practices. Exposure to machine learning pipelines is a plus.
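
The responsibilities above pair batch pipelines with real-time streaming over tools like Kafka and Spark. A minimal, hedged sketch of the streaming side is shown below; the broker address and topic name are placeholders, and running it requires the spark-sql-kafka connector package on the Spark classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream_stream").getOrCreate()

# Broker address and topic name are illustrative placeholders.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka delivers key/value as binary; cast the value to a string payload
# and count events per 1-minute window as a simple real-time metric.
counts = (
    events.select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

# Write the rolling counts to the console; a real pipeline would target a sink
# such as a data warehouse table or another Kafka topic.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```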

Posted 2 weeks ago

Apply

5.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Responsibilities / Qualifications Candidate must have 5-6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred. Ability to understand the existing system architecture and work towards the target architecture. Experience with data profiling activities, discovering data quality challenges, and documenting them. Experience with development and implementation of large-scale Data Lake and data analytics platforms on the AWS Cloud platform. Develop and unit test data pipeline architecture for data ingestion processes using AWS native services. Experience with development on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Glue Data Catalog, Lake Formation, Apache Airflow, Lambda, etc. Experience with development of a data governance framework, including the management of data, operating model, data policies, and standards. Experience with orchestration of workflows in an enterprise environment. Working experience with Agile methodology. Experience working with source code management tools such as AWS CodeCommit or GitHub. Experience working with Jenkins or any CI/CD pipelines using AWS services. Experience working in an onshore/offshore model and collaborating on deliverables. Good communication skills to interact with the onshore team.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Role: Data Architect
Job Location: Pune
Years of experience: 10+ yrs
Mandatory skills: Python, Apache Spark, Apache Airflow, AWS (S3, Glue, or Lambda), along with design or architecture experience.
Job Summary – Required Skills:
• Proficiency in multiple programming languages - ideally Python
• Proficiency in at least one cluster computing framework (preferably Spark, alternatively Flink or Storm)
• Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks, alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, Dynamo, MongoDB, or similar)
• Proficiency in at least one scheduling/orchestration tool (preferably Airflow, alternatively AWS Step Functions or similar)
• Proficiency with data structures, data serialization formats (JSON, AVRO, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and stream), one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (develop PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
• Strong organizational, problem-solving, and critical thinking skills; strong documentation skills
Preferred skills:
• Proficiency in IaC (preferably Terraform, alternatively AWS CloudFormation)

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. About The Team Come be a part of something big. If you want to be a part of building something big that will drive value throughout the entire global organization, then this is the opportunity for you. You will be working on top priority initiatives that span new and existing technologies - all to deliver outstanding results and experiences for our customers and employees. The Enterprise Data Services organization in Business Technology takes pride in enabling data driven business outcomes to spearhead Workday’s growth through trusted data excellence, innovation and architecture thought leadership. Our organization is responsible for developing and supporting Data Warehousing, Data Ingestion and Integration Services, Master Data Management (MDM), Data Quality Assurance, and the deployment of cutting-edge Advanced Analytics and Machine Learning solutions tailored to enhance multiple business sectors such as Sales, Marketing, Services, Support, and Customer Engagement. Our team harnesses the power of top-tier modern cloud platforms and services, including AWS, Databricks, Snowflake, Reltio, Tableau, Snaplogic, and MongoDB, complemented by a suite of AWS-native technologies like Spark, Airflow, Redshift, Sagemaker, and Kafka. These tools are pivotal in our drive to create robust data ecosystems that empower our business operations with precision and scalability. EDS is a global team distributed across the U.S, India and Canada. About The Role Join a pioneering organization at the forefront of technological advancement, dedicated to demonstrating data-driven insights to transform industries and drive innovation. We are actively seeking a skilled Data Platform and Support Engineer who will play a pivotal role in ensuring the smooth functioning of our data infrastructure, enabling self-service analytics, and empowering analytical teams across the organization. As a Data Platform and Support Engineer, you will oversee the management of our enterprise data hub, working alongside a team of dedicated data and software engineers to build and maintain a robust data ecosystem that drives decision-making at scale for internal analytical applications. You will play a key role in ensuring the availability, reliability, and performance of our data infrastructure and systems. You will be responsible for monitoring, maintaining, and optimizing data systems, providing technical support, and implementing proactive measures to enhance data quality and integrity. 
This role requires advanced technical expertise, problem-solving skills, and a strong commitment to delivering high-quality support services. The team is responsible for supporting Data Services, Data Warehouse, Analytics, Data Quality and Advanced Analytics/ML for multiple business functions including Sales, Marketing, Services, Support and Customer Experience. We use leading modern cloud platforms like AWS, Reltio, Snowflake, Tableau, Snaplogic and MongoDB in addition to native AWS technologies like Spark, Airflow, Redshift, Sagemaker and Kafka.

Job Responsibilities:
Monitor the health and performance of data systems, including databases, data warehouses, and data lakes.
Conduct root cause analysis and implement corrective actions to prevent recurrence of issues.
Manage and optimize data infrastructure components such as servers, storage systems, and cloud services.
Develop and implement data quality checks, validation rules, and data cleansing procedures.
Implement security controls and compliance measures to protect sensitive data and ensure regulatory compliance.
Design and implement data backup and recovery strategies to safeguard data against loss or corruption.
Optimize the performance of data systems and processes by tuning queries, optimizing storage, and improving ETL pipeline efficiency.
Maintain comprehensive documentation, runbooks, and fix guides for data systems and processes.
Collaborate with multi-functional teams, including data engineers, data scientists, business analysts, and IT operations.
Lead or participate in data-related projects, such as system migrations, upgrades, or expansions.
Deliver training and mentorship to junior team members, sharing knowledge and standard methodologies to support their professional development.
Participate in rotational shifts, including on-call rotations and coverage during weekends and holidays as required, to provide 24/7 support for data systems, responding to and resolving data-related incidents in a timely manner.
Hands-on experience with source version control, continuous integration, and release/organizational change delivery tools.

About You

Basic Qualifications:
6+ years of experience designing and building scalable and robust data pipelines to enable data-driven decisions for the business.
BE/Masters in computer science or equivalent is required.

Other Qualifications:
Prior experience with CRM systems (e.g. Salesforce) is desirable.
Experience building analytical solutions for Sales and Marketing teams.
Should have experience working on Snowflake, Fivetran, DBT and Airflow.
Experience with very large-scale data warehouse and data engineering projects.
Experience developing low-latency data processing solutions like AWS Kinesis, Kafka, Spark Stream processing.
Should be proficient in writing advanced SQL, with expertise in performance tuning of SQL.
Experience working with AWS data technologies like S3, EMR, Lambda, DynamoDB, Redshift etc.
Solid experience in one or more programming languages for processing of large data sets, such as Python or Scala.
Ability to create data models and STAR schemas for data consumption.
Extensive experience in troubleshooting data issues, analyzing end-to-end data pipelines and working with users to resolve issues.
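To ground the Snowflake/dbt/Airflow stack named in the qualifications above, here is a minimal, illustrative Airflow DAG of the kind such a role typically maintains. It is a sketch only: the DAG id, schedule, dbt project path, and validation step are all hypothetical, and it assumes Airflow 2.4+ with the dbt CLI available on the worker.

```python
# A minimal sketch of an Airflow DAG of the kind this role maintains.
# DAG id, schedule, connection details, and the dbt project path are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def check_row_counts(**context):
    """Placeholder data-quality check; a real check would query the warehouse."""
    print("Validating row counts for the latest load...")


with DAG(
    dag_id="daily_sales_mart_refresh",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",                        # run daily at 06:00 UTC
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    # Transform staged data with dbt (project path is an assumption)
    run_dbt_models = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/dbt/sales_mart",
    )

    # Lightweight post-load validation before downstream dashboards refresh
    validate_load = PythonOperator(
        task_id="validate_load",
        python_callable=check_row_counts,
    )

    run_dbt_models >> validate_load
```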
Our Approach to Flexible Work
With Flex Work, we're combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.

Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!

Posted 2 weeks ago

Apply

3.0 years

0 - 0 Lacs

Thane, Maharashtra, India

Remote

Linkedin logo

Experience: 3.00+ years
Salary: USD 18000-30000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Indefinite Contract (40 hrs a week/160 hrs a month)
(Note: This is a requirement for one of Uplers' clients - Steer Health)

What do you need for this opportunity?
Must have skills required: Airflow, Kubeflow, LangChain, RAGFlow, TensorFlow, Dialogflow, FastAPI, LLMs, PyTorch, Python

Steer Health is Looking for:

About The Role
Steer Health is seeking a talented Backend Engineer with expertise in AI/ML and healthcare technologies to design and implement Agentic AI workflows that redefine clinical and operational processes. You'll build scalable backend systems that integrate FHIR-compliant APIs, LLM-driven automation, and conversational AI to solve real-world healthcare challenges. If you're passionate about Python, AI workflows, and making a tangible impact in healthcare, this role is for you.

Key Responsibilities
Build and maintain APIs with FastAPI to enable seamless data exchange across EHRs, patient portals, and AI agents.
Architect AI-driven workflows using tools like RAGFlow or similar platforms to automate tasks such as clinical documentation, prior authorization, and patient triage.
Develop and fine-tune LLM-based solutions (e.g., GPT, Claude) with PyTorch, focusing on healthcare-specific use cases like diagnosis support or patient communication.
Integrate Dialogflow for conversational AI agents that power chatbots, voice assistants, and virtual health aides.
Collaborate on prompt engineering to optimize LLM outputs for accuracy, compliance, and clinical relevance.
Optimize backend systems for performance, scalability, and security in HIPAA-compliant environments.
Partner with cross-functional teams (data scientists, product managers, clinicians) to translate healthcare needs into technical solutions.

Qualifications
3+ years of backend engineering experience, with expertise in Python and frameworks like FastAPI or Flask.
Hands-on experience with PyTorch/TensorFlow and deploying ML models in production.
Familiarity with AI workflow tools (e.g., RAGFlow, Airflow, Kubeflow) and orchestration of LLM pipelines.
Experience integrating Dialogflow or similar platforms for conversational AI.
Strong understanding of LLMs (training, fine-tuning, and deployment) and prompt engineering best practices.
Knowledge of cloud platforms (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
Passion for healthcare innovation and improving patient/provider experiences.

Preferred Qualifications
Experience in healthcare tech (EHR integrations, HIPAA compliance, HL7/FHIR).
Contributions to open-source AI/healthcare projects.
Familiarity with LangChain, LlamaIndex, or agentic workflow frameworks.
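As an illustration of the FastAPI-plus-LLM backend work this posting describes, the following is a minimal sketch of a triage-style endpoint. The route, request model, and keyword-based urgency check are hypothetical stand-ins; a production service would call an actual LLM and add authentication, auditing, and HIPAA-grade safeguards.

```python
# A minimal sketch of the FastAPI-style service this posting describes.
# The endpoint path, request model, and triage logic are hypothetical stand-ins.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="triage-agent-demo")


class TriageRequest(BaseModel):
    patient_id: str
    message: str


class TriageResponse(BaseModel):
    urgency: str
    suggested_action: str


def classify_urgency(message: str) -> str:
    """Placeholder for an LLM call; here a trivial keyword rule stands in."""
    return "high" if "chest pain" in message.lower() else "routine"


@app.post("/triage", response_model=TriageResponse)
def triage(req: TriageRequest) -> TriageResponse:
    urgency = classify_urgency(req.message)
    action = "escalate to on-call clinician" if urgency == "high" else "schedule follow-up"
    return TriageResponse(urgency=urgency, suggested_action=action)
```

Assuming the file is saved as app.py, a sketch like this can be run locally with `uvicorn app:app --reload` and exercised by POSTing JSON to /triage.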
Why Join Steer Health?
Impact: Your work will directly enhance healthcare delivery for millions of patients.
Innovation: Build with the latest AI/ML tools in a fast-paced, forward-thinking environment.
Growth: Lead projects at the intersection of AI and healthcare, with opportunities for advancement.
Culture: Collaborative, mission-driven team with flexible work policies.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 weeks ago

Apply

3.0 - 6.0 years

9 - 18 Lacs

Bengaluru

Work from Office

Naukri logo

3+ yrs of exp as Data Engineer. Exp in AWS Cloud Services: EC2, S3, IAM. Exp on AWS Glue, DMS, RDBMS, MPP databases like Snowflake, Redshift. Knowledge of Data Modelling, ETL Process. This role will be 5 days WFO; please apply only if you are open to work from office. Only immediate joiners required.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Gracenote is the content business unit of Nielsen that powers the world of media entertainment. Our metadata solutions help media and entertainment companies around the world deliver personalized content search and discovery, connecting audiences with the content they love. We're at the intersection of people and media entertainment. With our cutting-edge technology and solutions, we help audiences easily find TV shows, movies, music and sports across multiple platforms. As the world leader in entertainment data and services, we power the world's top streaming platforms, cable and satellite TV providers, media companies, consumer electronics manufacturers, music services and automakers to navigate and succeed in the competitive streaming world. Our metadata entertainment solutions have a global footprint of 80+ countries, 100K+ channels and catalogs, 70+ sports and 100M+ music tracks, all across 35 languages.

Job Purpose
Develop and enhance SaaS/PaaS platforms for Kubernetes, data-lake query, data processing, orchestration, monitoring, and more by leveraging popular open source software and git automation.
Ensure the reliability and availability of our SaaS/PaaS services using SRE best practices for architecture, monitoring, alerting, etc.

Role Description
Evaluate, contribute to, and leverage open source technologies including Kubernetes, Trino, Spark, Airflow, Prometheus + friends, ELK, Jupyter, and more in order to create and enhance our internal SaaS/PaaS platforms.
Be an internal salesperson for the DevOps tools and best practices, helping to get all teams towards full adoption. The platform allows Gracenote to provide all teams with features, optimizations, updates, and security enhancements at scale and pace.
Diagnose technical and scalability faults on platform services, which are typically built on Kubernetes.
Provide technical support and usage guidance to the users of our platform's services.
Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.

Role Requirements
Passionate about software development and DevOps.
Competency in SQL and scripting in bash and/or Python for automation.
Experience with DevOps automation technologies like Terraform, Ansible, Secrets Management, Git, Jenkins (or similar).
Experience with Unix/Linux based platforms.
Experience with containers (Docker, containerd).
Experience with AWS services.
6+ years of work experience.
A degree in computer science, engineering, math or related fields.

Desired Skills
A personal tech blog.
Ability to code well in Java or Python (or another language and willingness to learn these two).
Experience with containers (Docker, containerd) and Kubernetes.
Experience with Presto, Trino, Spark, Airflow, Prometheus, ELK, or similar technologies.
History of open source contributions.
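For a concrete flavor of the platform-health diagnostics mentioned above, here is a minimal sketch using the official Kubernetes Python client to flag pods that are not in a healthy phase. The namespace and the standalone-script framing are assumptions; in practice this kind of check usually lives behind Prometheus alerting rules rather than ad-hoc scripts.

```python
# A minimal sketch of a platform-health check of the kind described above.
# Assumes the official `kubernetes` Python client and a local kubeconfig;
# the namespace name is hypothetical.
from kubernetes import client, config


def list_unhealthy_pods(namespace: str = "data-platform") -> list[str]:
    """Return names of pods in the namespace that are not Running or Succeeded."""
    config.load_kube_config()          # use config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace=namespace)
    return [
        p.metadata.name
        for p in pods.items
        if p.status.phase not in ("Running", "Succeeded")
    ]


if __name__ == "__main__":
    for name in list_unhealthy_pods():
        print(f"pod needing attention: {name}")
```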

Posted 2 weeks ago

Apply

4.0 - 8.0 years

5 - 8 Lacs

Hyderabad, Bengaluru

Work from Office

Naukri logo

Why Join?
Above market-standard compensation.
Contract-based or freelance opportunities (2-12 months).
Work with industry leaders solving real AI challenges.
Flexible work locations: Remote | Onsite | Hyderabad/Bangalore.

Your Role:
Architect and optimize ML infrastructure with Kubeflow, MLflow, SageMaker Pipelines.
Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD).
Automate ML workflows (feature engineering, retraining, deployment).
Scale ML models with Docker, Kubernetes, Airflow.
Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure).

Must-Have Skills:
Proficiency in Python, TensorFlow, PyTorch, CI/CD pipelines.
Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML).
Expertise in monitoring tools (MLflow, Prometheus, Grafana).
Knowledge of distributed data processing (Spark, Kafka).
(Bonus: Experience in A/B testing, canary deployments, serverless ML.)
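As a small illustration of the MLflow-based experiment tracking this role calls for, the sketch below logs parameters, a metric, and a model artifact for a toy scikit-learn run. The tracking URI, experiment name, and model are hypothetical; a real pipeline would train on versioned data and push the registered model through CI/CD.

```python
# A minimal sketch of MLflow experiment tracking, one piece of the MLOps stack above.
# Tracking URI, experiment name, and the toy model are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")   # assumption: a local tracking server
mlflow.set_experiment("churn-model-demo")           # hypothetical experiment name

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                 # hyperparameters for reproducibility
    mlflow.log_metric("accuracy", acc)        # headline metric for this run
    mlflow.sklearn.log_model(model, "model")  # artifact a CI/CD step could later deploy
```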

Posted 2 weeks ago

Apply

6.0 - 11.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Summary
We are looking for a Senior Analytics Engineer to drive data excellence and innovation in our organization. As a thought leader in data engineering and analytics principles, you will be responsible for designing, building, and optimizing our data infrastructure while ensuring cost efficiency, security, and scalability. You will play a crucial role in managing Databricks and AWS usage, ensuring budget adherence, and taking proactive measures to optimize costs. This role also requires expertise in ETL processes, large-scale data processing, analytics, and data-driven decision-making, along with strong analytical and leadership skills.

Responsibilities
Act as a thought leader in data engineering and analytics, driving best practices and standards.
Oversee cost management of Databricks and AWS, ensuring resource usage stays within allocated budgets and taking corrective actions when necessary.
Design, implement, and optimize ETL pipelines for incremental data loading, ensuring seamless data ingestion, transformation, and performance tuning.
Lead migration activities, ensuring smooth transitions while maintaining data integrity and availability.
Handle massive data loads efficiently, optimizing storage, compute usage, and query performance.
Adhere to Git principles for version control, ensuring best practices for collaboration and deployment.
Implement and manage DSR (Airflow) workflows to automate and schedule data pipelines efficiently.
Ensure data security and compliance, especially when handling PII data, aligning with regulations like GDPR and HIPAA.
Optimize query performance and data storage strategies to improve cost efficiency and speed of analytics.
Collaborate with data analysts and business stakeholders to enhance analytics capabilities, enabling data-driven decision-making.
Develop and maintain dashboards, reports, and analytical models to provide actionable insights for business and engineering teams.

Required Skills & Qualifications
Four-year or Graduate Degree in Computer Science, Information Systems, or any other related discipline, or commensurate work experience or demonstrated competence.
6-11 years of experience in Data Engineering, Analytics, Big Data, or related domains.
Strong expertise in Databricks, AWS (S3, EC2, Lambda, RDS, Redshift, Glue, etc.), and cost optimization strategies.
Hands-on experience with ETL pipelines, incremental data loads, and large-scale data processing.
Proven experience in analyzing large datasets, deriving insights, and optimizing data workflows.
Strong knowledge of SQL, Python, PySpark, and other data engineering and analytics tools.
Strong problem-solving, analytical, and leadership skills.
Experience with BI tools like Tableau, Looker, or Power BI for data visualization and reporting.

Preferred Certifications
Certified Software Systems Engineer (CSSE)
Certified Systems Engineering Professional (CSEP)

Cross-Org Skills
Effective Communication
Results Orientation
Learning Agility
Digital Fluency
Customer Centricity

Impact & Scope
Impacts function and leads and/or provides expertise to functional project teams and may participate in cross-functional initiatives.

Complexity
Works on complex problems where analysis of situations or data requires an in-depth evaluation of multiple factors.

Disclaimer
This job description describes the general nature and level of work performed in this role. It is not intended to be an exhaustive list of all duties, skills, responsibilities, knowledge, etc.
These may be subject to change and additional functions may be assigned as needed by management.
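The incremental-load responsibilities in the posting above are commonly implemented as a Delta Lake merge on Databricks; the sketch below shows the basic watermark-and-upsert pattern. Table names, the watermark value, and the key column are hypothetical, and it assumes a runtime where PySpark and the delta package are available.

```python
# A minimal sketch of an incremental (merge/upsert) load on Databricks/Delta Lake.
# Table names, the watermark timestamp, and the join key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# 1. Read only records newer than the last successful load (simple watermark pattern)
last_load_ts = "2024-01-01 00:00:00"                    # would normally come from a control table
incremental = (
    spark.read.table("staging.orders_raw")               # hypothetical staging table
    .where(F.col("updated_at") > F.lit(last_load_ts))
)

# 2. Upsert into the curated Delta table keyed on order_id
target = DeltaTable.forName(spark, "analytics.orders")   # hypothetical curated table
(
    target.alias("t")
    .merge(incremental.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```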

Posted 2 weeks ago

Apply

8.0 - 18.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Greetings from TCS!! TCS is Hiring for Data Architect

Interview Mode: Virtual
Required Experience: 8-18 years
Work location: PAN INDIA

Data Architect
Technical Architect with experience in designing data platforms; experience in one of the major platforms such as Snowflake, Databricks, Azure ML, AWS data platforms, etc.
Hands-on experience in ADF, HDInsight, Azure SQL, PySpark, Python, MS Fabric, data mesh.
Good to have: Spark SQL, Spark Streaming, Kafka.
Hands-on experience in Databricks on AWS, Apache Spark, AWS S3 (Data Lake), AWS Glue, AWS Redshift / Athena.
Good to have: AWS Lambda, Python, AWS CI/CD, Kafka, MLflow, TensorFlow or PyTorch, Airflow, CloudWatch.

If interested, kindly send your updated CV and the below-mentioned details through e-mail: srishti.g2@tcs.com
Name:
E-mail ID:
Contact Number:
Highest qualification:
Preferred Location:
Highest qualification university:
Current organization:
Total years of experience:
Relevant years of experience:
Any gap: Mention No. of months/years (career/education):
If any, then reason for gap:
Is it rebegin:
Previous organization name:
Current CTC:
Expected CTC:
Notice Period:
Have you worked with TCS before (Permanent / Contract):

Posted 2 weeks ago

Apply

4.0 - 8.0 years

5 - 9 Lacs

Hyderabad, Bengaluru

Work from Office

Naukri logo

What's in it for you?
Pay above market standards.
The role is contract-based, with project timelines from 2-12 months, or freelancing.
Be a part of an Elite Community of professionals who can solve complex AI challenges.
Work location could be: Remote (highly likely), Onsite at client location, or Deccan AI's office in Hyderabad or Bangalore.

Responsibilities:
Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
Develop real-time and batch data pipelines to support analytics and machine learning.
Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
Contributions to open-source data engineering communities.

What are the next steps?
Register on our Soul AI website.
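For the real-time streaming requirement above, here is a minimal consumer sketch using the kafka-python client. The topic, broker address, and event fields are hypothetical; at the scale this posting describes, the same logic would typically run inside a Flink or Spark Structured Streaming job rather than a single-process consumer.

```python
# A minimal sketch of a streaming consumer for a real-time ingestion pipeline.
# Topic, broker address, and the event schema are hypothetical; assumes kafka-python.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                         # hypothetical topic
    bootstrap_servers=["localhost:9092"],         # assumption: local broker
    group_id="analytics-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Downstream this would be validated, enriched, and written to the lake/warehouse
    print(f"partition={message.partition} offset={message.offset} user={event.get('user_id')}")
```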

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies