
147 Apache Airflow Jobs - Page 5

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

7.0 - 12.0 years

9 - 14 Lacs

Mumbai

Work from Office

We are seeking a highly skilled Senior Snowflake Developer with expertise in Python, SQL, and ETL tools to join our dynamic team. The ideal candidate will have a proven track record of designing and implementing robust data solutions on the Snowflake platform, along with strong programming skills and experience with ETL processes.

Key Responsibilities:
- Design and develop scalable data solutions on the Snowflake platform to support business needs and analytics requirements.
- Lead the end-to-end development lifecycle of data pipelines, including data ingestion, transformation, and loading.
- Write efficient SQL queries and stored procedures to perform complex data manipulations and transformations within Snowflake.
- Implement automation scripts and tools in Python to streamline data workflows and improve efficiency.
- Collaborate with cross-functional teams to gather requirements, design data models, and deliver high-quality solutions.
- Tune and optimize Snowflake databases and queries for performance and scalability.
- Implement best practices for data governance, security, and compliance within Snowflake environments.
- Mentor junior team members and provide technical guidance and support as needed.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience working with Snowflake data warehouses.
- Strong proficiency in SQL, with the ability to write complex queries and optimize performance.
- Extensive experience developing data pipelines and ETL processes using Python and ETL tools such as Apache Airflow, Informatica, or Talend.
- At least 2 years of strong, hands-on Python coding experience.
- Solid understanding of data warehousing concepts, data modeling, and schema design.
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Excellent problem-solving and analytical skills with keen attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Relevant certifications in Snowflake or related technologies are a plus.
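For candidates brushing up on this stack, here is a minimal, illustrative sketch of the kind of Airflow-orchestrated Snowflake load this role describes. The connection details, table names, and schedule are assumptions for illustration only, not this employer's actual pipeline; it uses the Airflow 2.x DAG API and the snowflake-connector-python package.

```python
# Illustrative sketch only: an Airflow DAG that runs a Snowflake MERGE via a PythonOperator.
# Credentials, warehouse/database/table names, and the schedule are hypothetical.
from datetime import datetime, timedelta

import snowflake.connector
from airflow import DAG
from airflow.operators.python import PythonOperator


def load_orders_to_snowflake():
    """Upsert staged rows into a target table (example MERGE)."""
    conn = snowflake.connector.connect(
        account="my_account",   # in practice, pull these from a secrets backend
        user="etl_user",
        password="***",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="SALES",
    )
    try:
        conn.cursor().execute(
            """
            MERGE INTO orders AS tgt
            USING staging_orders AS src
              ON tgt.order_id = src.order_id
            WHEN MATCHED THEN UPDATE SET tgt.amount = src.amount
            WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (src.order_id, src.amount)
            """
        )
    finally:
        conn.close()


with DAG(
    dag_id="snowflake_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    PythonOperator(task_id="merge_orders", python_callable=load_orders_to_snowflake)
```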

Posted 1 month ago

Apply

8.0 - 12.0 years

22 - 27 Lacs

Indore, Chennai

Work from Office

We are hiring a Senior Python DevOps Engineer to develop scalable apps using Flask/FastAPI, automate CI/CD, manage cloud and ML workflows, and support containerized deployments in OpenShift environments. Required Candidate profile 8+ years in Python DevOps with expertise in Flask, FastAPI, CI/CD, cloud, ML workflows, and OpenShift. Skilled in automation, backend optimization, and global team collaboration.

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 15 Lacs

Hyderabad, Secunderabad

Work from Office

Strong programming and scripting skills in SQL and Python. Experience with data pipeline tools (e.g., Apache Airflow, Azure Data Factory, AWS Glue). Hands-on with cloud-based data platforms such as Azure, AWS. Familiarity with data modeling and warehousing concepts (e.g., star schema, snowflake).

Posted 1 month ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Pune, Hinjewadi

Work from Office

Job Summary Synechron is seeking an experienced and technically proficient Senior PySpark Data Engineer to join our data engineering team. In this role, you will be responsible for developing, optimizing, and maintaining large-scale data processing solutions using PySpark. Your expertise will support our organizations efforts to leverage big data for actionable insights, enabling data-driven decision-making and strategic initiatives. Software Requirements Required Skills: Proficiency in PySpark Familiarity with Hadoop ecosystem components (e.g., HDFS, Hive, Spark SQL) Experience with Linux/Unix operating systems Data processing tools like Apache Kafka or similar streaming platforms Preferred Skills: Experience with cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight) Knowledge of Python (beyond PySpark), Java or Scala relevant to big data applications Familiarity with data orchestration tools (e.g., Apache Airflow, Luigi) Overall Responsibilities Design, develop, and optimize scalable data processing pipelines using PySpark. Collaborate with data engineers, data scientists, and business analysts to understand data requirements and deliver solutions. Implement data transformations, aggregations, and extraction processes to support analytics and reporting. Manage large datasets in distributed storage systems, ensuring data integrity, security, and performance. Troubleshoot and resolve performance issues within big data workflows. Document data processes, architectures, and best practices to promote consistency and knowledge sharing. Support data migration and integration efforts across varied platforms. Strategic Objectives: Enable efficient and reliable data processing to meet organizational analytics and reporting needs. Maintain high standards of data security, compliance, and operational durability. Drive continuous improvement in data workflows and infrastructure. Performance Outcomes & Expectations: Efficient processing of large-scale data workloads with minimum downtime. Clear, maintainable, and well-documented code. Active participation in team reviews, knowledge transfer, and innovation initiatives. Technical Skills (By Category) Programming Languages: Required: PySpark (essential); Python (needed for scripting and automation) Preferred: Java, Scala Databases/Data Management: Required: Experience with distributed data storage (HDFS, S3, or similar) and data warehousing solutions (Hive, Snowflake) Preferred: Experience with NoSQL databases (Cassandra, HBase) Cloud Technologies: Required: Familiarity with deploying and managing big data solutions on cloud platforms such as AWS (EMR), Azure, or GCP Preferred: Cloud certifications Frameworks and Libraries: Required: Spark SQL, Spark MLlib (basic familiarity) Preferred: Integration with streaming platforms (e.g., Kafka), data validation tools Development Tools and Methodologies: Required: Version control systems (e.g., Git), Agile/Scrum methodologies Preferred: CI/CD pipelines, containerization (Docker, Kubernetes) Security Protocols: Optional: Basic understanding of data security practices and compliance standards relevant to big data management Experience Requirements Minimum of 7+ years of experience in big data environments with hands-on PySpark development. Proven ability to design and implement large-scale data pipelines. Experience working with cloud and on-premises big data architectures. Preference for candidates with domain-specific experience in finance, banking, or related sectors. 
Candidates with substantial related experience and strong technical skills in big data, even from different domains, are encouraged to apply. Day-to-Day Activities Develop, test, and deploy PySpark data processing jobs to meet project specifications. Collaborate in multi-disciplinary teams during sprint planning, stand-ups, and code reviews. Optimize existing data pipelines for performance and scalability. Monitor data workflows, troubleshoot issues, and implement fixes. Engage with stakeholders to gather new data requirements, ensuring solutions are aligned with business needs. Contribute to documentation, standards, and best practices for data engineering processes. Support the onboarding of new data sources, including integration and validation. Decision-Making Authority & Responsibilities: Identify performance bottlenecks and propose effective solutions. Decide on appropriate data processing approaches based on project requirements. Escalate issues that impact project timelines or data integrity. Qualifications Bachelors degree in Computer Science, Information Technology, or related field. Equivalent experience considered. Relevant certifications are preferred: Cloudera, Databricks, AWS Certified Data Analytics, or similar. Commitment to ongoing professional development in data engineering and big data technologies. Demonstrated ability to adapt to evolving data tools and frameworks. Professional Competencies Strong analytical and problem-solving skills, with the ability to model complex data workflows. Excellent communication skills to articulate technical solutions to non-technical stakeholders. Effective teamwork and collaboration in a multidisciplinary environment. Adaptability to new technologies and emerging trends in big data. Ability to prioritize tasks effectively and manage time in fast-paced projects. Innovation mindset, actively seeking ways to improve data infrastructure and processes.
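As context for the PySpark work described above, a small, self-contained example of an extract-transform-load job follows. The storage paths, column names, and partitioning scheme are assumptions, not Synechron's code.

```python
# Illustrative PySpark batch job: read raw events, aggregate, write curated output.
# Paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_transaction_aggregates").getOrCreate()

# Ingest raw events from distributed storage (hypothetical S3/HDFS path).
events = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Transform: drop bad records, derive a date column, aggregate per customer per day.
daily = (
    events.filter(F.col("amount") > 0)
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("customer_id", "event_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Load: write partitioned output for downstream Hive/Snowflake consumers.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_customer_aggregates/"
)
spark.stop()
```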

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 12 Lacs

Pune, Chennai, Bengaluru

Hybrid

We are hiring a Data Streaming Engineer.
Experience: 5+ years
Location: Mumbai, Pune, Chennai, Bangalore
Work mode: Hybrid (3 days work from office)

Job Description (Data Streaming, offshore):
- Flink and Python
- Data lake systems (OLAP systems)
- SQL (able to write complex SQL queries)
- Orchestration (Apache Airflow preferred)
- Hadoop (Spark and Hive, including optimization of Spark and Hive applications)
- Snowflake (good to have)
- Data quality (good to have)
- File storage (S3 is good to have)

Note: Candidates can share their resume at shrutia.talentsketchers@gmail.com

Posted 1 month ago

Apply

6.0 - 10.0 years

6 - 16 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Role: AWS Redshift Ops + PL/SQL + Unix
Experience: 6+ years

Detailed job description / skill set:
- Incident management
- Troubleshooting issues
- Contributing to development
- Collaborating with other teams
- Suggesting improvements
- Enhancing system performance
- Training new employees

Mandatory skills: AWS Redshift, PL/SQL, Apache Airflow, Unix, ETL, DWH

Posted 1 month ago

Apply

10.0 - 15.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Work from Office

Roles and Responsibilities:
- Work closely with Product Owners and stakeholders to design the technical architecture for the data platform to meet the requirements of the proposed solution.
- Work with leadership to set the standards for software engineering practices within the machine learning engineering team and support other disciplines.
- Play an active role in leading team meetings and workshops with clients.
- Choose and use the right analytical libraries, programming languages, and frameworks for each task.
- Help the Data Engineering team produce high-quality code that allows us to put solutions into production.
- Create and own the technical product backlogs for products, and help the team close backlog items on time.
- Refactor code into reusable libraries, APIs, and tools.
- Help us shape the next generation of our products.

What We're Looking For:
- 10+ years of total experience in data management, including implementation of modern data ecosystems on AWS/cloud platforms.
- Strong experience with AWS ETL/file-movement tools (Glue, Athena, Lambda, Kinesis, and the wider AWS integration stack).
- Strong experience with Agile development and SQL.
- Strong experience with two or three AWS database technologies (Redshift, Aurora, RDS, S3, and other AWS data services), covering security, policies, and access management.
- Strong programming experience with Python and Spark.
- Fast learner of new technologies.
- Experience with Apache Airflow and other automation tooling.
- Excellent data modeling skills.
- Excellent oral and written communication skills.
- A high level of intellectual curiosity, external perspective, and interest in innovation.
- Strong analytical, problem-solving, and investigative skills.
- Experience applying quality and compliance requirements.
- Experience with security models and development on large data sets.
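As a rough illustration of the AWS integration stack this role lists, the sketch below triggers a Glue ETL job with boto3 and polls until it finishes. The job name and region are assumptions, not part of this employer's environment.

```python
# Hedged sketch: start an AWS Glue job run with boto3 and wait for a terminal state.
# Job name and region are hypothetical.
import time

import boto3

glue = boto3.client("glue", region_name="us-east-1")

run_id = glue.start_job_run(JobName="curate_orders_job")["JobRunId"]

while True:
    state = glue.get_job_run(JobName="curate_orders_job", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"Glue run {run_id} finished with state {state}")
        break
    time.sleep(30)  # poll every 30 seconds
```

In a production setup this kind of call would usually live inside an Airflow task or Step Functions state rather than a standalone script.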

Posted 1 month ago

Apply

6.0 - 10.0 years

18 - 25 Lacs

Bengaluru

Work from Office

Key Responsibilities: Analyzes and solve problems using technical experience, judgment and precedents Provides informal guidance to new team members Explains complex information to others in straightforward situations 1. Data Engineering and Modelling: Design & Develop Scalable Data Pipelines: Leverage AWS technologies to design, develop, and manage end-to-end data pipelines with services like ETL, Kafka, DMS, Glue, Lambda, and Step Functions . Orchestrate Workflows: Use Apache Airflow to build, deploy, and manage automated workflows, ensuring smooth and efficient data processing and orchestration. Snowflake Data Warehouse: Design, implement, and maintain Snowflake data warehouses, ensuring optimal performance, scalability, and seamless data availability. Infrastructure Automation: Utilize Terraform and CloudFormation to automate cloud infrastructure provisioning, ensuring efficiency, scalability, and adherence to security best practices. Logical & Physical Data Models: Design and implement high-performance logical and physical data models using Star and Snowflake schemas that meet both technical and business requirements. Data Modeling Tools: Utilize Erwin or similar modeling tools to create, maintain, and optimize data models, ensuring they align with evolving business needs. Continuous Optimization: Actively monitor and improve data models to ensure they deliver the best performance, scalability, and security. 2. Collaboration, Communication, and Continuous Improvement: Cross-Functional Collaboration: Work closely with data scientists, analysts, and business stakeholders to gather requirements and deliver tailored data solutions that meet business objectives. Data Security Expertise: Provide guidance on data security best practices and ensure team members follow secure coding and data handling procedures. Innovation & Learning: Stay abreast of emerging trends in data engineering, cloud computing, and data security to recommend and implement innovative solutions. Optimization & Automation: Proactively identify opportunities to optimize system performance, enhance data security, and automate manual workflows. Key Skills & Expertise: Snowflake Data Warehousing: Hands-on experience with Snowflake, including performance tuning, role-based access controls, dynamic Masking, data sharing, encryption, and row/column-level security. Data Modeling: Expertise in physical and logical data modeling, specifically with Star and Snowflake schemas using tools like Erwin or similar . AWS Services Proficiency: In-depth knowledge of AWS services like ETL, DMS, Glue, Step Functions, Airflow, Lambda, CloudFormation, S3, IAM, EKS and Terraform . Programming & Scripting: Strong working knowledge of Python, R, Scala, PySpark and SQL (including stored procedures). DevOps & CI/CD: Solid understanding of CI/CD pipelines, DevOps principles, and infrastructure-as-code practices using tools like Terraform, JFrog, Jenkins and CloudFormation . Analytical & Troubleshooting Skills: Proven ability to solve complex data engineering issues and optimize data workflows. Excellent Communication: Strong interpersonal and communication skills, with the ability to work across teams and with stakeholders to drive data-centric projects. Qualifications & Experience: Bachelors degree in computer science, Engineering, or a related field. 7-8 years of experience designing and implementing large-scale Data Lake/Warehouse integrations with diverse data storage solutions. 
Certifications: AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect (preferred). Snowflake Advanced Architect and/or Snowflake Core Certification (required).

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Gurugram

Work from Office

Role Overview We are looking for a Senior ETL Engineer with deep expertise in Apache Airflow to design, build, and manage complex data workflows and pipelines across cloud platforms. The ideal candidate will bring strong experience in Python, SQL, and cloud-native tools (AWS/GCP) to deliver scalable and reliable data infrastructure, supporting analytics, reporting, and operational systems. Key Responsibilities Design, implement, and optimize scalable ETL/ELT workflows using Apache Airflow DAGs . Build and maintain data pipelines with Python and SQL , integrating multiple data sources. Develop robust solutions for pipeline orchestration, failure recovery, retries , and notifications. Leverage AWS or GCP services (e.g., S3, Lambda, BigQuery, Cloud Functions, IAM). Integrate with internal and external data systems via secure REST APIs and Webhooks . Monitor Airflow performance, manage DAG scheduling, and resolve operational issues. Implement observability features like logging, metrics, alerts, and pipeline health checks. Collaborate with analytics, data science, and engineering teams to support data needs. Drive automation and reusability across pipeline frameworks and templates. Ensure data quality, governance, compliance , and lineage across ETL processes. Required Skills 5+ years of experience in ETL/Data Engineering with hands-on Airflow expertise. Strong programming skills in Python , with solid experience writing production scripts. Proficiency in SQL for data manipulation, joins, and performance tuning. Deep knowledge of Apache Airflow scheduling, sensors, operators, XComs, and hooks. Experience working on cloud platforms like AWS or GCP in data-heavy environments. Comfort with REST APIs , authentication protocols, and data integration techniques. Knowledge of CI/CD tools, Git, and containerization (Docker, Kubernetes). Nice to Have Familiarity with dbt, Snowflake, Redshift, BigQuery, or other modern data platforms. Experience with Terraform or infrastructure-as-code for Airflow deployment. Understanding of data privacy, regulatory compliance (GDPR, HIPAA), and metadata tracking.
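For reference, here is a minimal sketch of the orchestration patterns this role highlights (retries, failure recovery, notifications, DAG dependencies) using the Airflow 2.x API. The task bodies and the notification hook are placeholders, not a specific production setup.

```python
# Minimal Airflow sketch: an extract -> transform -> load chain with retries and a
# failure callback. All task logic is a placeholder.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_failure(context):
    # Placeholder: in practice this might post to Slack, PagerDuty, or email.
    print(f"Task {context['task_instance'].task_id} failed on {context['ds']}")


def extract(**_):
    print("pull from source APIs / databases")


def transform(**_):
    print("clean and reshape the extracted data")


def load(**_):
    print("write to the warehouse or data lake")


default_args = {
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_failure,
}

with DAG(
    dag_id="etl_orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```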

Posted 1 month ago

Apply

5.0 - 10.0 years

11 - 21 Lacs

Pune

Work from Office

Role: Data Engineer - Media Mix Modeling (MMM)
Experience: 4+ years
Location: Pune / Remote

Job Summary: The ideal candidate will have strong experience building scalable ETL pipelines and working with both online and offline marketing data to support MMM, attribution, and ROI analysis. The role requires close collaboration with data scientists and marketing teams to deliver clean, structured datasets for modeling.

Mandatory Skills:
- Strong proficiency in SQL and Python or Scala
- Hands-on experience with cloud platforms (preferably GCP/BigQuery)
- Proven experience with ETL tools such as Apache Airflow or dbt
- Experience integrating data from multiple sources: digital platforms (Google Ads, Meta), CRM, POS, TV, radio, etc.
- Understanding of Media Mix Modeling (MMM) and attribution methodologies

Good to have:
- Experience with data visualization tools (Tableau, Looker, Power BI)
- Exposure to statistical modeling techniques

Please share your resume at Neesha1@damcogroup.com
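As a small illustration of the MMM data preparation this role describes, the pandas sketch below combines weekly channel spend with sales into one modeling-ready table. The file names and columns are assumptions for illustration only.

```python
# Illustrative MMM prep: join weekly spend from several channels with revenue.
# File names and columns are hypothetical.
import pandas as pd

ads = pd.read_csv("google_ads_weekly.csv")    # columns: week, spend
meta = pd.read_csv("meta_ads_weekly.csv")     # columns: week, spend
tv = pd.read_csv("tv_spots_weekly.csv")       # columns: week, spend
sales = pd.read_csv("pos_sales_weekly.csv")   # columns: week, revenue

# Stack the channel files, then pivot to one spend column per channel.
spend = (
    pd.concat(
        [df.assign(channel=name) for name, df in
         {"google_ads": ads, "meta": meta, "tv": tv}.items()]
    )
    .pivot_table(index="week", columns="channel", values="spend", aggfunc="sum")
    .fillna(0.0)
)

# One row per week: channel spend columns plus the revenue target for the MMM regression.
model_input = spend.join(sales.set_index("week")["revenue"], how="inner").reset_index()
print(model_input.head())
```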

Posted 1 month ago

Apply

4.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Hybrid

Collaborate with Business Partners: Execute the data vision strategy and provide thought leadership to the business. Develop analytical tools and capabilities that enable data queries and responses to requests.
Deliver Advanced Analytics and Models: Apply advanced statistical techniques (especially A/B testing) and machine learning models. Research new modeling methods and experiment designs.
Data Exploration: Ingest, cleanse, and profile large structured and unstructured datasets. Generate consumer insights from big data.
Data Visualization and Storytelling: Use a range of tools and technologies to visualize insights and build a narrative around them. Recommend strategies in both technical and non-technical language.

Qualifications:
Required:
- Bachelor's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
- Proficiency in Python (2+ years) and SQL (3-4+ years).
- Experience with data visualization tools (Tableau, Power BI, Matplotlib).
- Strong understanding of statistical methods and machine learning techniques.
- Ability to work with large datasets and cloud-based platforms (AWS).
- Excellent problem-solving, critical-thinking, and communication skills.
Preferred:
- Master's degree in Data Science or a related field.
- Experience in a fast-paced environment.
- Experience with Dash/Shiny frameworks, Git, Apache Airflow, and AWS SageMaker.
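For the A/B testing emphasis above, here is a tiny, self-contained example of a two-sample test. The data is synthetic and the metric is a placeholder; it only illustrates the mechanics.

```python
# Illustrative A/B test on synthetic data: Welch's t-test on a continuous metric.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=5_000)    # e.g. baseline basket value
variant = rng.normal(loc=10.15, scale=2.0, size=5_000)   # e.g. treatment group

t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)  # Welch's t-test
lift = variant.mean() / control.mean() - 1

print(f"lift = {lift:.2%}, t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```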

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office

ABOUT THE ROLE Role Description: We are seeking a highly experienced and hands-on Test Automation Engineering Manager with strong leadership skills and deep expertise in Data Integration, Data Quality , and automated data validation across real-time and batch pipelines . In this strategic role, you will lead the design, development, and implementation of scalable test automation frameworks that validate data ingestion, transformation, and delivery across diverse sources into AWS-based analytics platforms , leveraging technologies like Databricks , PySpark , and cloud-native services. As a lead , you will drive the overall testing strategy, lead a team of test engineers, and collaborate cross-functionally with data engineering, platform, and product teams. Your focus will be on delivering high-confidence, production-grade data pipelines with built-in validation layers that support enterprise analytics, ML models, and reporting platforms. The role is highly technical and hands-on , with a strong focus on automation, metadata validation , and ensuring data governance practices are seamlessly integrated into development pipelines. Roles & Responsibilities: Define and drive the test automation strategy for data pipelines, ensuring alignment with enterprise data platform goals. Lead and mentor a team of data QA/test engineers, providing technical direction, career development, and performance feedback. Own delivery of automated data validation frameworks across real-time and batch data pipelines using Databricks and AWS services. Collaborate with data engineering, platform, and product teams to embed data quality checks and testability into pipeline design. Design and implement scalable validation frameworks for data ingestion, transformation, and consumption layers. Automate validations for multiple data formats including JSON, CSV, Parquet, and other structured/semi-structured file types during ingestion and transformation. Automate data testing workflows for pipelines built on Databricks/Spark, integrated with AWS services like S3, Glue, Athena, and Redshift. Establish reusable test components for schema validation, null checks, deduplication, threshold rules, and transformation logic. Integrate validation processes with CI/CD pipelines, enabling automated and event-driven testing across the development lifecycle. Drive the selection and adoption of tools/frameworks that improve automation, scalability, and test efficiency. Oversee testing of data visualizations in Tableau, Power BI, or custom dashboards, ensuring backend accuracy via UI and data-layer validations. Ensure accuracy of API-driven data services, managing functional and regression testing via Postman, Python, or other automation tools. Track test coverage, quality metrics, and defect trends, providing regular reporting to leadership and ensuring continuous improvement. establishing alerting and reporting mechanisms for test failures, data anomalies, and governance violations. Contribute to system architecture and design discussions, bringing a strong quality and testability lens early into the development lifecycle. Lead test automation initiatives by implementing best practices and scalable frameworks, embedding test suites into CI/CD pipelines to enable automated, continuous validation of data workflows, catalog changes, and visualization updates Mentor and guide QA engineers, fostering a collaborative, growth-oriented culture focused on continuous learning and technical excellence. 
Collaborate cross-functionally with product managers, developers, and DevOps to align quality efforts with business goals and release timelines. Conduct code reviews, test plan reviews, and pair-testing sessions to ensure team-level consistency and high quality standards.

Must-Have Skills:
- Hands-on experience with Databricks and Apache Spark for building and validating scalable data pipelines
- Strong expertise in AWS services including S3, Glue, Athena, Redshift, and Lake Formation
- Proficiency in Python, PySpark, and SQL for developing test automation and validation logic
- Experience validating data in various file formats such as JSON, CSV, Parquet, and Avro
- In-depth understanding of data integration workflows, including batch and real-time (streaming) pipelines
- Strong ability to define and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation
- Experience designing modular, reusable automation frameworks for large-scale data validation
- Skill in integrating tests with CI/CD tools like GitHub Actions, Jenkins, or Azure DevOps
- Familiarity with orchestration tools such as Apache Airflow, Databricks Jobs, or AWS Step Functions
- Hands-on experience with API testing using Postman, pytest, or custom automation scripts
- Proven track record of leading and mentoring QA/test engineering teams
- Ability to define and own the test automation strategy and roadmap for data platforms
- Strong collaboration skills for working with engineering, product, and data teams
- Excellent communication skills for presenting test results, quality metrics, and project health to leadership
- Contributions to internal quality dashboards or data observability systems
- Awareness of metadata-driven testing approaches and lineage-based validations
- Experience with agile testing methodologies such as Scaled Agile
- Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest

Good-to-Have Skills:
- Experience with data governance tools such as Apache Atlas, Collibra, or Alation
- Understanding of DataOps methodologies and practices
- Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch
- Experience building or maintaining test data generators

Education and Professional Certifications: Bachelor's/Master's degree in Computer Science or Engineering preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team orientation, with a focus on achieving team goals
- Strong presentation and public speaking skills
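To make the automated data-quality checks above concrete, here is a hedged PySpark sketch covering schema, null, duplicate, and threshold rules. The dataset path, key columns, and rules are assumptions for illustration, not this employer's framework.

```python
# Illustrative PySpark data-quality checks: schema, nulls, duplicates, thresholds.
# Dataset path, key columns, and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/orders/")

failures = []

# 1. Schema check: required columns must be present.
required_cols = {"order_id", "customer_id", "amount", "order_date"}
missing = required_cols - set(df.columns)
if missing:
    failures.append(f"missing columns: {sorted(missing)}")

# 2. Null check on the business key.
null_keys = df.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    failures.append(f"{null_keys} rows with NULL order_id")

# 3. Duplicate check on the business key.
dupes = df.groupBy("order_id").count().filter(F.col("count") > 1).count()
if dupes > 0:
    failures.append(f"{dupes} duplicated order_id values")

# 4. Threshold rule: no negative amounts expected.
negatives = df.filter(F.col("amount") < 0).count()
if negatives > 0:
    failures.append(f"{negatives} rows with negative amount")

if failures:
    raise AssertionError("Data quality checks failed: " + "; ".join(failures))
print("All data quality checks passed.")
```

Checks like these are typically wrapped as reusable functions and invoked from a CI/CD pipeline or an orchestrated validation task rather than run ad hoc.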

Posted 1 month ago

Apply

10.0 - 12.0 years

40 Lacs

Hyderabad

Work from Office

Location: Hyderabad (5 days work from office)

About the Team: At DAZN, the Analytics Engineering team sits at the core of the business, transforming hundreds of data points into actionable insights that drive strategic decisions. From content strategy and product engagement to marketing optimization and revenue analytics, we enable scalable, accurate, and accessible data solutions across the organization.

The Role: We're looking for a Lead Analytics Engineer to take ownership of our analytics data pipelines and play a critical role in designing and scaling DAZN's modern data stack. This is a hands-on technical leadership role where you'll shape robust data models using dbt and Snowflake, orchestrate workflows via Airflow, and ensure high-quality, trusted data is delivered for analytical and reporting needs.

Key Responsibilities:
- Lead the development and governance of semantic data models to support consistent, reusable metrics.
- Architect scalable data transformations on Snowflake using SQL and dbt, applying data warehousing best practices.
- Manage and enhance pipeline orchestration with Airflow, ensuring timely and reliable data processing.
- Collaborate closely with teams across Product, Finance, Marketing, and Tech to translate business needs into technical data models.
- Establish and maintain best practices around version control, testing, and CI/CD for analytics workflows.
- Mentor junior engineers and promote a culture of technical excellence and peer learning.
- Champion data quality, documentation, and observability across the analytics stack.

What You'll Need to Have:
- 10+ years of experience in data or analytics engineering, with 2+ years in a leadership or mentoring role.
- Strong hands-on experience with SQL, dbt, and Snowflake.
- Experience with cloud platforms (AWS, GCP, or Azure).
- Proven expertise in pipeline orchestration tools such as Apache Airflow, Prefect, or Luigi.
- Deep understanding of dimensional modeling, ELT patterns, and data governance.
- Ability to navigate both technical deep dives and high-level stakeholder collaboration.
- Excellent communication skills with both technical and non-technical teams.

Nice to Have:
- Background in media, OTT, or sports tech domains.
- Familiarity with BI tools like Looker, Power BI, or Tableau.
- Exposure to data testing frameworks such as dbt tests or Great Expectations.
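As a rough sketch of the dbt-on-Snowflake orchestration described in this role, the DAG below runs `dbt run` followed by `dbt test` via BashOperator. The project path, target name, and schedule are assumptions; managed integrations (for example Cosmos or dbt Cloud) are common alternatives.

```python
# Hedged sketch: orchestrate a dbt project from Airflow with BashOperator.
# Project directory, target, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/airflow/dbt/analytics_project"   # hypothetical project location

with DAG(
    dag_id="dbt_analytics_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )

    dbt_run >> dbt_test   # only test models after they have been built
```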

Posted 1 month ago

Apply

8.0 - 12.0 years

15 - 20 Lacs

Pune

Work from Office

We are looking for a highly experienced Lead Data Engineer / Data Architect to lead the design, development, and implementation of scalable data pipelines, data Lakehouse, and data warehousing solutions. The ideal candidate will provide technical leadership to a team of data engineers, drive architectural decisions, and ensure best practices in data engineering. This role is critical in enabling data-driven decision-making and modernizing our data infrastructure. Key Responsibilities: Act as a technical leader responsible for guiding the design, development, and implementation of data pipelines, data Lakehouse, and data warehousing solutions. Lead a team of data engineers, ensuring adherence to best practices and standards. Drive the successful delivery of high-quality, scalable, and reliable data solutions. Play a key role in shaping data architecture, adopting modern data technologies, and enabling data-driven decision-making across the team. Provide technical vision, guidance, and mentorship to the team. Lead technical design discussions, perform code reviews, and contribute to architectural decisions.

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

Role Overview: An AWS SME with a Data Science Background is responsible for leveraging Amazon Web Services (AWS) to design, implement, and manage data-driven solutions. This role involves a combination of cloud computing expertise and data science skills to optimize and innovate business processes. Key Responsibilities: Data Analysis and Modelling: Analyzing large datasets to derive actionable insights and building predictive models using AWS services like SageMaker, Bedrock, Textract etc. Cloud Infrastructure Management: Designing, deploying, and maintaining scalable cloud infrastructure on AWS to support data science workflows. Machine Learning Implementation: Developing and deploying machine learning models using AWS ML services. Security and Compliance: Ensuring data security and compliance with industry standards and best practices. Collaboration: Working closely with cross-functional teams, including data engineers, analysts, DevOps and business stakeholders, to deliver data-driven solutions. Performance Optimization: Monitoring and optimizing the performance of data science applications and cloud infrastructure. Documentation and Reporting: Documenting processes, models, and results, and presenting findings to stakeholders. Skills & Qualifications Technical Skills: Proficiency in AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker). Strong programming skills in Python. Experience with AI/ML project life cycle steps. Knowledge of machine learning algorithms and frameworks (e.g., TensorFlow, Scikit-learn). Familiarity with data pipeline tools (e.g., AWS Glue, Apache Airflow). Excellent communication and collaboration abilities.

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 17 Lacs

Chennai

Work from Office

- Strong working experience in Python programming, including expertise with PySpark.
- Strong experience with pandas, NumPy, joblib, and other popular libraries.
- Must have experience with AWS EMR and PySpark.
- Good working experience with parallel batch processing in Python.
- Good working experience with AWS Batch and Step Functions.
- Expertise in writing effective, scalable, highly performant code.
- Good to have: Apache Airflow.
- Should have implemented two or more large-scale projects and been part of end-to-end system implementations.
- Good analytical and problem-solving skills.
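Since the posting names joblib and parallel batch processing, here is a small illustrative pattern: fanning a list of file batches out across processes and combining the partial results. The file list and per-batch logic are placeholders.

```python
# Illustrative parallel batch processing with joblib + pandas.
# The file list and aggregation logic are hypothetical.
from joblib import Parallel, delayed
import pandas as pd

BATCH_FILES = [f"data/part-{i:04d}.csv" for i in range(16)]  # placeholder paths


def process_batch(path: str) -> pd.DataFrame:
    """Read one batch, clean it, and return per-customer totals."""
    df = pd.read_csv(path)
    df = df.dropna(subset=["customer_id"])
    return df.groupby("customer_id", as_index=False)["amount"].sum()


# Run batches across processes; n_jobs=-1 uses every available core.
partials = Parallel(n_jobs=-1)(delayed(process_batch)(p) for p in BATCH_FILES)

# Combine the partial aggregates into a final result.
result = (
    pd.concat(partials)
    .groupby("customer_id", as_index=False)["amount"]
    .sum()
)
print(result.head())
```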

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Essential Responsibilities: As a Senior Software Engineer, your responsibilities will include: Building, refining, tuning, and maintaining our real-time and batch data infrastructure Daily use technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc. Maintaining data quality and accuracy across production data systems Working with Data Analysts to develop ETL processes for analysis and reporting Working with Product Managers to design and build data products Working with our team to scale and optimize our data infrastructure Participate in architecture discussions, influence the road map, take ownership and responsibility over new projects Participating in on-call rotation in their respective time zones (be available by phone or email in case something goes wrong) Desired Characteristics: Minimum 8 years of software engineering experience. An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired. 2+ Years of Experience/Fluency in Python Proficient with relational databases and Advanced SQL Expert in usage of services like Spark and Hive. Experience working with container-based solutions is a plus. Experience in adequate usage of any scheduler such as Apache Airflow, Apache Luigi, Chronos etc. Experience in adequate usage of cloud services (AWS) at scale Proven long term experience and enthusiasm for distributed data processing at scale, eagerness to learn new things. Expertise in designing and architecting distributed low latency and scalable solutions in either cloud and on-premises environment. Exposure to the whole software development lifecycle from inception to production and monitoring. Experience in Advertising Attribution domain is a plus Experience in agile software development processes Excellent interpersonal and communication skills

Posted 1 month ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Hyderabad

Work from Office

CDP ETL & Database Engineer

The CDP ETL & Database Engineer will specialize in architecting, designing, and implementing solutions that are sustainable and scalable. The ideal candidate will understand CRM methodologies, have an analytical mindset, and bring a background in relational modeling within a hybrid architecture. The candidate will help drive the business towards specific technical initiatives and will work closely with the Solutions Management, Delivery, and Product Engineering teams, joining a team of developers across the US, India, and Costa Rica.

Responsibilities:
- ETL Development: Build pipelines to feed downstream data processes; analyze data, interpret business requirements, and establish relationships between data sets. Familiarity with different encoding formats and file layouts such as JSON and XML is expected.
- Implementations & Onboarding: Work with the team to onboard new clients onto the ZMP/CDP+ platform. Solidify business requirements, perform ETL file validation, establish users, perform complex aggregations, and syndicate data across platforms. Take a test-driven approach to development and document processes and workflows.
- Incremental Change Requests: Analyze change requests and determine the best approach to implementation and execution, which requires a deep understanding of the platform's overall architecture. Implement and test change requests in a development environment to ensure they will not negatively impact downstream processes.
- Change Data Management: Adhere to change data management procedures and actively participate in CAB meetings where change requests are presented and approved. Ensure processes run in a development environment before introducing change, and perform peer-to-peer code reviews and solution reviews before production deployment.
- Collaboration & Process Improvement: Participate in knowledge-sharing sessions with peers to discuss solutions, best practices, overall approach, and process. Look for opportunities to streamline processes, with an eye towards building a repeatable model that reduces implementation duration.

Job Requirements:
- Well versed in relational data modeling, ETL and FTP concepts, advanced analytics using SQL functions, and cloud technologies (AWS, Snowflake).
- Able to decipher requirements, provide recommendations, and implement solutions within predefined timeframes.
- Able to work independently while also contributing in a team setting; able to confidently communicate status, raise exceptions, and voice concerns to their direct manager.
- Participate in internal client project status meetings with the Solution/Delivery management teams; when required, collaborate with the Business Solutions Analyst (BSA) to solidify requirements.
- Ability to work in a fast-paced, agile environment with a sense of urgency when escalated issues arise.
- Strong communication and interpersonal skills, with the ability to multitask and prioritize workload based on client demand.
- Familiarity with Jira for workflow management and time allocation.
- Familiarity with the Scrum framework: backlog, planning, sprints, story points, retrospectives, etc.

Required Skills:
- ETL: tools such as Talend (preferred, not required); DMExpress and Informatica are nice to have.
- Databases: hands-on experience with Snowflake (required); MySQL/PostgreSQL nice to have; familiarity with NoSQL database methodologies is a plus.
- Programming languages: PL/SQL; JavaScript and Python are a strong plus; Scala is nice to have.
- AWS: knowledge of S3, EMR (concepts), EC2 (concepts), and Systems Manager / Parameter Store.
- Understands JSON data structures and key-value pairs.
- Working knowledge of code repositories such as Git, WinCVS, SVN.
- Workflow management tools such as Apache Airflow, Kafka, Automic/Appworx.
- Jira.

Minimum Qualifications:
- Bachelor's degree or equivalent
- 2-4 years' experience
- Excellent verbal and written communication skills
- Self-starter, highly motivated
- Analytical mindset

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations Essential Responsibilities: As a Senior Software Engineer, your responsibilities will include: Building, refining, tuning, and maintaining our real-time and batch data infrastructure Daily use technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc. Maintaining data quality and accuracy across production data systems Working with Data Analysts to develop ETL processes for analysis and reporting Working with Product Managers to design and build data products Working with our DevOps team to scale and optimize our data infrastructure Participate in architecture discussions, influence the road map, take ownership and responsibility over new projects Participating in on-call rotation in their respective time zones (be available by phone or email in case something goes wrong) Desired Characteristics: Minimum 8 years of software engineering experience. An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired. 2+ Years of Experience/Fluency in Python Proficient with relational databases and Advanced SQL Expert in usage of services like Spark and Hive. Experience working with container-based solutions is a plus. Experience in adequate usage of any scheduler such as Apache Airflow, Apache Luigi, Chronos etc. Experience in adequate usage of cloud services (AWS) at scale Proven long term experience and enthusiasm for distributed data processing at scale, eagerness to learn new things. Expertise in designing and architecting distributed low latency and scalable solutions in either cloud and on-premises environment. Exposure to the whole software development lifecycle from inception to production and monitoring. Experience in Advertising Attribution domain is a plus Experience in agile software development processes Excellent interpersonal and communication skills

Posted 1 month ago

Apply

7.0 - 12.0 years

9 - 15 Lacs

Bengaluru

Work from Office

We are looking for lead or principal software engineers to join our Data Cloud team. The Data Cloud team is responsible for the Zeta Identity Graph platform, which captures billions of behavioural, demographic, environmental, and transactional signals for people-based marketing. As part of this team, the data engineer will design and grow our existing data infrastructure to democratize data access, enable complex data analyses, and automate optimization workflows for business and marketing operations.

Essential Responsibilities: As a Lead or Principal Data Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as HDFS, Spark, Snowflake, Hive, HBase, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in a 24/7 on-call rotation (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 7 years of software engineering experience.
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale; eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle, from inception to production and monitoring.
- Fluency in Python, or solid experience in Scala or Java.
- Proficiency with relational databases and advanced SQL.
- Expert use of services like Spark, HDFS, Hive, and HBase.
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale.
- Experience with agile software development processes.
- Excellent interpersonal and communication skills.

Nice to have:
- Experience with large-scale / multi-tenant distributed systems.
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase.
- Experience with real-time streaming frameworks: Flink, Storm.
- Experience with web frameworks such as Flask and Django.

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Senior Software Engineer Location: Bengaluru As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations Essential Responsibilities: As a Senior Software Engineer, your responsibilities will include: Building, refining, tuning, and maintaining our real-time and batch data infrastructure Daily use technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc. Maintaining data quality and accuracy across production data systems Working with Data Analysts to develop ETL processes for analysis and reporting Working with Product Managers to design and build data products Working with our DevOps team to scale and optimize our data infrastructure Participate in architecture discussions, influence the road map, take ownership and responsibility over new projects Participating in on-call rotation in their respective time zones (be available by phone or email in case something goes wrong) Desired Characteristics: Minimum 8 years of software engineering experience. An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired. 2+ Years of Experience/Fluency in Python Proficient with relational databases and Advanced SQL Expert in usage of services like Spark and Hive. Experience working with container-based solutions is a plus. Experience in adequate usage of any scheduler such as Apache Airflow, Apache Luigi, Chronos etc. Experience in adequate usage of cloud services (AWS) at scale Proven long term experience and enthusiasm for distributed data processing at scale, eagerness to learn new things. Expertise in designing and architecting distributed low latency and scalable solutions in either cloud and on-premises environment. Exposure to the whole software development lifecycle from inception to production and monitoring. Experience in Advertising Attribution domain is a plus Experience in agile software development processes Excellent interpersonal and communication skills

Posted 1 month ago

Apply

4.0 - 5.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Experience: 5+ years
Location: Bengaluru

Role Overview: We are looking for a Senior Data Engineer who will play a key role in designing, building, and maintaining data ingestion frameworks and scalable data pipelines. The ideal candidate should have strong expertise in platform architecture, data modeling, and cloud-based data solutions to support real-time and batch processing needs.

What you'll be doing:
- Design, develop, and optimise DBT models to support scalable data transformations
- Architect and implement modern ELT pipelines using DBT and orchestration tools like Apache Airflow and Prefect
- Lead performance tuning and query optimization for DBT models running on Snowflake, Redshift, or Databricks
- Integrate DBT workflows and pipelines with AWS services (S3, Lambda, Step Functions, RDS, Glue) and event-driven architectures
- Implement robust data ingestion processes from multiple sources, including manufacturing execution systems (MES), manufacturing stations, and web applications
- Manage and monitor orchestration tools (Airflow, Prefect) for automated DBT model execution
- Implement CI/CD best practices for DBT, ensuring version control, automated testing, and deployment workflows
- Troubleshoot data pipeline issues and provide solutions for optimizing cost and performance

What you'll have:
- 5+ years of hands-on experience with DBT, including model design, testing, and performance tuning
- 5+ years of strong SQL expertise, with experience in analytical query optimization and database performance tuning
- 5+ years of programming experience, especially building custom DBT macros and scripts, developing APIs, and working with AWS services using boto3
- 3+ years of experience with orchestration tools like Apache Airflow and Prefect for scheduling DBT jobs
- Hands-on experience with modern cloud data platforms like Snowflake, Redshift, Databricks, or BigQuery
- Experience with AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch)
- Familiarity with serverless architectures and infrastructure as code (CloudFormation/Terraform)
- Ability to communicate timelines effectively and deliver the MVPs set for the sprint
- Strong analytical and problem-solving skills, with the ability to work across cross-functional teams

Nice to have:
- Experience in hardware manufacturing data processing
- Contributions to open-source data engineering tools
- Knowledge of Tableau or other BI tools for data visualization
- Understanding of front-end development (React, JavaScript, or similar) to collaborate effectively with UI teams or build internal tools for data visualization

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Are you a seasoned data engineer with a passion for hands-on technical work? Do you thrive in an environment that values innovation, collaboration, and cutting-edge technologies? We are looking for a seasoned Integration Engineer to join our team, someone who is passionate about building and maintaining scalable data pipelines and integrations. The ideal candidate will have a strong foundation in Python programming, experience with Snowflake for data warehousing, proficiency in AWS and Kubernetes (EKS) for cloud services management, and expertise in CI/CD practices, Apache Airflow, DBT, and API development. This role is critical to enhancing our data integration capabilities and supporting our data-driven initiatives. Role and Responsibilities: As the Technical Data Integration Engineer, you will play a pivotal role in shaping the future of our data integration engineering initiatives. You will be part of talented data integration engineers while remaining actively involved in the technical aspects of the projects. Your responsibilities will include: Hands-On Contribution: Continue to be hands-on with data integration engineering tasks, including data pipeline development, EL processes, and data integration. Be the go-to expert for complex technical challenges. Integrations Architecture: Design and implement scalable and efficient data integration architectures that meet business requirements. Ensure data integrity, quality, scalability, and security throughout the pipeline. Tool Proficiency: Leverage your expertise in Snowflake, SQL, Apache Airflow, AWS, API, and Python to architect, develop, and optimize data solutions. Stay current with emerging technologies and industry best practices. Data Quality: Monitor data quality and integrity, implementing data governance policies as needed. Cross-Functional Collaboration: Collaborate with data science, data warehousing, analytics, and other cross-functional teams to understand data requirements and deliver actionable insights. Performance Optimization :Identify and address performance bottlenecks within the data infrastructure. Optimize data pipelines for speed, reliability, and efficiency. Qualifications Minimum Bachelor's degree in Computer Science, Engineering, or related field. Advanced degree is a plus. 5 years of hands-on experience in data engineering. Familiarity with cloud platforms, such as AWS or Azure. Expertise in Apache Airflow, Snowflake, SQL, Python, Shell scripting, API gateways, web services setup. Strong experience in full-stack development, AWS, Linux administration, data lake construction, data quality assurance, and integration metrics. Excellent analytical, problem-solving, and decision-making abilities. Strong communication skills, with the ability to articulate technical concepts to non-technical stakeholders. A collaborative mindset, with a focus on team success. If you are a results-oriented Data Integration Engineer with a strong background in Apache Airflow, Snowflake, SQL, Python and API, we encourage you to apply. Join us in building data solutions that drive business success and innovation

Posted 2 months ago

Apply

5.0 - 7.0 years

30 - 40 Lacs

Bengaluru

Hybrid

Senior Software Developer (Python)
Experience: 5-7 years
Salary: Up to USD 40,000 / year
Preferred Notice Period: Within 60 days
Shift: 11:00 AM to 8:00 PM IST
Opportunity Type: Hybrid (Bengaluru)
Placement Type: Permanent (Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Apache Airflow, Astronomer, Pandas/PySpark/Dask, RESTful API, Snowflake, Docker, Python, SQL
Good-to-have skills: CI/CD, data visualization, Matplotlib, Prometheus, AWS, Kubernetes

A single platform for loans/securities and finance (one of Uplers' clients) is looking for a Senior Software Developer (Python) who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Job Summary: We are seeking a highly skilled Senior Python Developer with expertise in large-scale data processing and Apache Airflow. The ideal candidate will be responsible for designing, developing, and maintaining scalable data applications and optimizing data pipelines. You will be an integral part of our R&D and Technical Operations team, focusing on data engineering, workflow automation, and advanced analytics.

Key Responsibilities:
- Design and develop sophisticated Python applications for processing and analyzing large datasets.
- Implement efficient and scalable data pipelines using Apache Airflow and Astronomer.
- Create, optimize, and maintain Airflow DAGs for complex workflow orchestration.
- Work with data scientists to implement and scale machine learning models.
- Develop robust APIs and integrate various data sources and systems.
- Optimize application performance for handling petabyte-scale data operations.
- Debug, troubleshoot, and enhance existing Python applications.
- Write clean, maintainable, and well-tested code following best practices.
- Participate in code reviews and mentor junior developers.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.

Required Skills & Qualifications:
- Strong programming skills in Python with 5+ years of hands-on experience.
- Proven experience with large-scale data processing frameworks (e.g., Pandas, PySpark, Dask).
- Extensive hands-on experience with Apache Airflow for workflow orchestration.
- Experience with the Astronomer platform for Airflow deployment and management.
- Proficiency in SQL and experience with the Snowflake database.
- Expertise in designing and implementing RESTful APIs.
- Basic knowledge of Java programming.
- Experience with containerization technologies (Docker).
- Strong problem-solving skills and the ability to work independently.

Preferred Skills:
- Experience with cloud platforms (AWS).
- Knowledge of CI/CD pipelines and DevOps practices.
- Familiarity with Kubernetes for container orchestration.
- Experience with data visualization libraries (Matplotlib, Seaborn, Plotly).
- Background in financial services or experience with financial data.
- Proficiency in monitoring tools like Prometheus, Grafana, and the ELK stack.

Engagement Type: Full-time direct hire on RiskSpan payroll
Job Type: Permanent
Location: Hybrid (Bangalore)
Working time: 11:00 AM to 8:00 PM
Interview Process: 3-4 rounds

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload an updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview.

About Our Client: RiskSpan uncovers insights and mitigates risk for mortgage loans and structured products. The Edge Platform provides data and predictive models to run forecasts under a range of scenarios and analyze Agency and non-Agency MBS, loans, and MSRs. Leverage our bleeding-edge cloud, machine learning, and AI capabilities to scale faster, optimize model builds, and manage information more efficiently.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal.) If you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
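Since the role lists Pandas/PySpark/Dask for large-scale processing, here is a hedged Dask sketch that aggregates a dataset too large for memory. The input path and column names are assumptions, not this client's data.

```python
# Illustrative Dask aggregation over a partitioned Parquet dataset.
# Path and columns are hypothetical.
import dask.dataframe as dd

# Lazily read a partitioned Parquet dataset.
trades = dd.read_parquet("s3://example-bucket/loans/trades/*.parquet")

# Build the computation graph: filter, then aggregate per security.
summary = (
    trades[trades["notional"] > 0]
    .groupby("security_id")["notional"]
    .sum()
)

# Trigger execution across partitions/workers and bring the small result into memory.
result = summary.compute()
print(result.head())
```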

Posted 2 months ago

Apply