4.0 - 7.0 years
15 - 17 Lacs
Hyderabad, Bengaluru
Work from Office
Design, develop, and implement data solutions using AWS Data Stack components such as Glue and Redshift. Write and optimize advanced SQL queries for data extraction, transformation, and analysis. Develop data processing workflows and ETL processes using Python and PySpark.
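For illustration only (not part of the posting), a minimal sketch of the kind of PySpark ETL workflow this role describes; the S3 paths, column names, and aggregation are hypothetical assumptions.

```python
# Minimal PySpark ETL sketch; bucket paths and column names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV data landed in S3 (hypothetical path)
orders = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: basic cleansing and a daily revenue aggregation
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Load: write partitioned Parquet for downstream Redshift Spectrum / Athena consumers
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_revenue/"
)
```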
Posted 3 weeks ago
3.0 - 5.0 years
9 - 13 Lacs
Pune
Work from Office
Degree in Computer Science (or similar); alternatively, well-founded professional experience in the desired field.
Roles & Responsibilities: A Cloud Engineer (DevOps) in AWS is responsible for designing, implementing, and managing AWS-based solutions. This role involves ensuring the scalability, security, and efficiency of AWS infrastructure to support business operations and development activities. Collaborate with cross-functional teams to optimize cloud services and drive innovation.
Tasks: Design and implement scalable, secure, and reliable AWS cloud infrastructure. Manage and optimize AWS resources to ensure cost-efficiency. Develop and maintain Infrastructure as Code (IaC) scripts. Monitor system performance and troubleshoot issues. Implement security best practices and compliance measures. Collaborate with development teams to support application deployment. Automate operational tasks using scripting and automation tools. Conduct regular system audits and generate reports. Stay updated with the latest AWS features and industry trends. Provide technical guidance and support to team members.
Requirements: At least 5 years of experience as an AWS cloud engineer or AWS architect, preferably in the automotive sector. Business-fluent English (at least C1). Very good communication and presentation skills.
Required Skill Set: Proficiency in AWS services (S3, ECS, Lambda, Glue, Athena, EC2, SageMaker, Batch Processing, Bedrock, API Gateway, Security Hub, AWS Inspector, etc.). Strong understanding of cloud architecture and best practices. Experience with the infrastructure as code (IaC) tool AWS CDK together with a programming language like Python or TypeScript. Knowledge of networking concepts, security protocols, and SonarQube. Familiarity with CI/CD pipelines and DevOps practices in GitLab. Ability to troubleshoot and resolve technical issues. Scripting skills (Python, Bash, etc.). Experience with monitoring and logging tools (CloudWatch, CloudTrail). Understanding of containerization (Docker, ECS). Excellent communication and collaboration skills.
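For illustration only, a minimal AWS CDK v2 sketch in Python of the IaC work this role describes; the stack, bucket, and function names are hypothetical assumptions, not taken from the posting.

```python
# Minimal AWS CDK v2 sketch; resource names are illustrative assumptions.
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class DataPlatformStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Encrypted, versioned bucket for raw data landing
        raw_bucket = s3.Bucket(
            self, "RawDataBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,
        )

        # Small Lambda that could react to new objects; inline code keeps the sketch self-contained
        _lambda.Function(
            self, "IngestTrigger",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_inline("def handler(event, context):\n    return {'status': 'ok'}"),
            environment={"RAW_BUCKET": raw_bucket.bucket_name},
        )

app = App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```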
Posted 3 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
Pune
Work from Office
Qualification: Degree in Computer Science (or similar); alternatively, well-founded professional experience in the desired field.
Roles & Responsibilities: As a Senior Data Engineer, you manage and develop the solutions in close alignment with various business and Spoke stakeholders. You are responsible for the implementation of the IT governance guidelines. Collaborate with the Spoke's Data Scientists, Data Analysts, and Business Analysts, where relevant.
Tasks: Create and manage data pipeline architecture for data ingestion, pipeline setup, and data curation. Experience working with and creating cloud data solutions. Assemble large, complex data sets that meet functional and non-functional business requirements. Implement the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using PySpark, SQL, and AWS big data technologies. Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Manipulate data at scale, getting data into a ready-to-use state in close alignment with various business and Spoke stakeholders.
Must Have: Advanced knowledge of ETL, Data Lake, Data Warehouse, and RDS architectures. Python, SQL (any other OOP language is also valuable). PySpark (preferably) or Spark knowledge. Object-oriented programming, clean code, and good documentation skills. AWS: S3, Athena, Lambda, Glue, IAM, SQS, EC2, QuickSight, etc. Git. Data analysis and visualization.
Optional: AWS CDK (Cloud Development Kit). CI/CD knowledge.
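For illustration only, a small sketch of querying curated data with Athena via boto3, the kind of building block this role works with; the database, table, and result bucket are hypothetical assumptions.

```python
# Hedged sketch: running an Athena query with boto3; names and paths are illustrative.
import time
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS revenue FROM curated.orders GROUP BY order_date",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query finishes, then fetch the first page of results
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows) - 1} result rows")  # first row is the header
```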
Posted 3 weeks ago
7.0 - 12.0 years
10 - 14 Lacs
Gurugram
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue. Good-to-have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years of full-time education.
Summary: As part of a Data Transformation programme you will be part of the Data Marketplace team. In this team you will be responsible for the design and implementation of a dashboard for assessing compliance with controls and policies at various stages of the data product lifecycle, with centralised compliance scoring. Experience with the data product lifecycle is preferable. Example skills: Data Visualisation, Amazon QuickSight, Tableau, Power BI, Qlik, Data Analysis & Interpretation. As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.
Roles & Responsibilities: - Expected to be an SME - Collaborate with and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute to key decisions - Provide solutions to problems for their immediate team and across multiple teams - Lead the application development process - Ensure timely project delivery - Provide technical guidance and support to the team
Professional & Technical Skills: - Must-have skills: Proficiency in AWS Glue - Strong understanding of cloud computing principles - Experience with data integration and ETL processes - Hands-on experience in designing and implementing scalable applications - Knowledge of data warehousing concepts
Additional Information: - The candidate should have a minimum of 7.5 years of experience in AWS Glue - This position is based at our Gurugram office - 15 years of full-time education is required
Posted 3 weeks ago
2.0 - 5.0 years
6 - 10 Lacs
Kochi
Work from Office
Job description: Seeking a skilled and proactive Data Engineer with 2-4 years of experience to support our enterprise data warehousing and analytics initiatives. The candidate will be responsible for building scalable data pipelines, transforming data for analytics, and enabling data integration across cloud and on-premise systems.
Key Responsibilities: Build and manage data lakes and data warehouses using services like Amazon S3, Redshift, and Athena. Design and build secure, scalable, and efficient ETL/ELT pipelines on AWS using services like Glue, Lambda, and Step Functions. Work on SAP Datasphere to build and maintain Spaces, Data Builders, Views, and Consumption Layers. Develop and maintain scalable data models and optimize queries for performance. Monitor and optimize data workflows to ensure reliability, performance, and cost-efficiency. Collaborate with Data Analysts and BI teams to provide clean, validated, and well-documented datasets. Monitor, troubleshoot, and enhance data workflows and pipelines. Ensure data quality, integrity, and governance policies are met.
Required Skills: Strong SQL skills and experience with relational databases like MySQL or SQL Server. Proficient in Python or Scala for data transformation and scripting. Familiarity with cloud platforms like AWS (S3, Redshift, Glue) and Azure.
Good-to-Have Skills: AWS Certification (AWS Certified Data Analytics). Exposure to modern data stack tools like Snowflake. Experience in cloud-based projects and working in an Agile environment. Understanding of data governance, security best practices, and compliance standards.
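For illustration only, a minimal sketch of wiring Lambda into a Glue-based ETL/ELT pipeline of the kind described; the Glue job name and argument keys are hypothetical assumptions.

```python
# Hedged sketch: a Lambda handler that starts a Glue job when a new file lands in S3.
# The job name and argument keys are illustrative assumptions, not from the posting.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Extract the bucket/key of the newly arrived object from the S3 event notification
    record = event["Records"][0]["s3"]
    source_path = f"s3://{record['bucket']['name']}/{record['object']['key']}"

    # Start the (hypothetical) curation job, passing the new file as a job argument
    response = glue.start_job_run(
        JobName="curate-sap-extracts",
        Arguments={"--source_path": source_path},
    )
    return {"JobRunId": response["JobRunId"]}
```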
Posted 3 weeks ago
8.0 - 13.0 years
27 - 35 Lacs
Kochi, Bengaluru
Work from Office
About Us: DBiz Solution is a Transformational Partner. Digital transformation is intense. We'd like for you to have something to hold on to whilst you set out bringing your ideas into existence. Beyond anything, we put humans first. This means solving real problems with real people and meeting needs with real, working solutions. DBiz leverages a wealth of experience building a variety of software to improve our clients' ability to respond to change and build tomorrow's digital business. We're quite proud of our record of accomplishment. Having delivered over 150 projects for over 100 clients, we can honestly say we leave our clients happy and wanting more. Using data, we aim to unlock value and create platforms/products at scale that can evolve with business strategies using our innovative Rapid Application Development methodologies. The passion for creating an impact: Our passion for creating an impact drives everything we do. We believe that technology has the power to transform businesses and improve lives, and it is our mission to harness this power to make a difference. We constantly strive to innovate and deliver solutions that not only meet our clients' needs but exceed their expectations, allowing them to achieve their goals and drive sustainable growth. Through our world-leading digital transformation strategies, we are always growing and improving. That means creating an environment where every one of us can strive together for excellence.
Senior Data Engineer - AWS (Glue, Data Warehousing, Optimization & Security): Experienced Senior Data Engineer (8+ yrs) with deep expertise in AWS cloud data services, particularly AWS Glue, to design, build, and optimize scalable data solutions. The ideal candidate will drive end-to-end data engineering initiatives from ingestion to consumption, with a strong focus on data warehousing, performance optimization, self-service enablement, and data security. The candidate needs to have experience in consulting and troubleshooting exercises to design best-fit solutions.
Key Responsibilities: Consult with business and technology stakeholders to understand data requirements, troubleshoot, and advise on best-fit AWS data solutions. Design and implement scalable ETL pipelines using AWS Glue, handling structured and semi-structured data. Architect and manage modern cloud data warehouses (e.g., Amazon Redshift, Snowflake, or equivalent). Optimize data pipelines and queries for performance, cost-efficiency, and scalability. Develop solutions that enable self-service analytics for business and data science teams. Implement data security, governance, and access controls. Collaborate with data scientists, analysts, and business stakeholders to understand data needs. Monitor, troubleshoot, and improve existing data solutions, ensuring high availability and reliability.
Required Skills & Experience: 8+ years of experience in data engineering on the AWS platform. Strong hands-on experience with AWS Glue, Lambda, S3, Athena, Redshift, and IAM. Proven expertise in data modelling, data warehousing concepts, and SQL optimization. Experience designing self-service data platforms for business users. Solid understanding of data security, encryption, and access management. Proficiency in Python. Familiarity with DevOps practices & CI/CD. Strong problem-solving skills. Exposure to BI tools (e.g., QuickSight, Power BI, Tableau) for self-service enablement.
Preferred Qualifications: AWS Certified Data Analytics – Specialty or Solutions Architect – Associate.
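For illustration only, a minimal skeleton of a Glue ETL job handling semi-structured data, the core task named in this posting; it only runs inside a Glue job environment, and the catalog database, table, field mappings, and output path are hypothetical assumptions.

```python
# Hedged sketch of a Glue ETL job using DynamicFrames; names and paths are illustrative.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read semi-structured JSON previously crawled into the Glue Data Catalog
events = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="clickstream_events"
)

# Flatten and rename the fields the warehouse cares about
mapped = ApplyMapping.apply(
    frame=events,
    mappings=[
        ("detail.user_id", "string", "user_id", "string"),
        ("detail.event_type", "string", "event_type", "string"),
        ("time", "string", "event_time", "timestamp"),
    ],
)

# Write curated Parquet back to S3 for Redshift Spectrum / Athena consumers
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/clickstream/"},
    format="parquet",
)
job.commit()
```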
Posted 3 weeks ago
2.0 - 7.0 years
3 - 4 Lacs
Navi Mumbai
Work from Office
The MRM Executive will be responsible for analyzing the medical information of US patients on various Electronic Health Record (EHR) platforms. Using predefined rules, they will perform various tasks, including creating and updating patient charts and orders. Required Candidate Profile: The role includes data entry tasks, extracting patient medical information from Zoho CRM, and uploading it to various EHRs. Assist with billing, accounting, report generation, and maintaining trackers.
Posted 3 weeks ago
12.0 - 17.0 years
30 - 45 Lacs
Bengaluru
Work from Office
Work Location: Bangalore. Experience: 10+ yrs. Required Skills: Experience with AWS cloud and AWS services such as S3 buckets, Lambda, API Gateway, and SQS queues; experience with batch job scheduling and identifying data/job dependencies; experience with data engineering using the AWS platform and Python; familiarity with AWS services like EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway; familiarity with software DevOps CI/CD tools such as Git, Jenkins, Linux, and shell scripting. Thanks & Regards, Suganya R, suganya@spstaffing.in
Posted 4 weeks ago
4.0 - 9.0 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Work Location: Bangalore, Chennai, Hyderabad, Pune, Bhubaneshwar, Kochi. Experience: 4-6 yrs. Required Skills: Experience in PySpark. Experience in AWS/Glue. Please share your updated profile with suganya@spstaffing.in if you are actively looking for a change.
Posted 4 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for source-to-target data movement and implementing solutions that tackle the client's needs.
Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Must have 5+ years of experience in Big Data: Hadoop, Spark (Scala, Python), HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis, including a custom framework for rule generation (similar to a rules engine). Developed Python code to gather data from HBase and designed solutions implemented using PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.
Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
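For illustration only, a small sketch of a Spark job with Hive support applying a business transformation, in the spirit of the Hive read/write work mentioned above; the database, table, and rule threshold are hypothetical assumptions.

```python
# Hedged sketch: Spark with Hive support applying a business rule and writing back to a Hive table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("claims-transform")
    .enableHiveSupport()          # modern replacement for the older HiveContext
    .getOrCreate()
)

claims = spark.table("staging.claims")

# Business rule (illustrative): keep the current snapshot and flag high-value claims
flagged = (
    claims
    .filter(F.col("is_current"))
    .withColumn("high_value", F.col("claim_amount") > 10000)
)

flagged.write.mode("overwrite").saveAsTable("curated.claims_flagged")
```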
Posted 4 weeks ago
7.0 - 10.0 years
25 - 30 Lacs
Navi Mumbai
Work from Office
We are looking for a highly skilled Data Catalog Engineer to join our team at Serendipity Corporate Services, with 6-8 years of experience in the IT Services & Consulting industry.
Roles and Responsibilities: Design and implement data cataloging solutions to meet business requirements. Develop and maintain large-scale data catalogs using various tools and technologies. Collaborate with cross-functional teams to identify and prioritize data needs. Ensure data quality and integrity by implementing data validation and testing procedures. Optimize data catalog performance by analyzing query logs and identifying improvement areas. Provide technical support and training to end-users on data catalog usage.
Job Requirements: Strong understanding of data modeling and database design principles. Experience with data management tools such as SQL and NoSQL databases. Proficiency in programming languages such as Python or Java. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills.
Posted 1 month ago
1.0 - 4.0 years
3 - 5 Lacs
Hyderabad, Mumbai (All Areas)
Work from Office
WE ARE HIRING - AR CALLER - Mumbai, Hyderabad. Experience: Min 1 year as an AR caller.
Mumbai Location: Package: Max up to 40k take-home. Qualification: Inter & above.
Hyderabad Location: Package: Max up to 33k take-home. Qualification: Graduation.
Shift Timings: 6:30 PM to 3:30 AM, WFO. Virtual and walk-in interviews available.
WE ARE HIRING - Prior Authorization - Mumbai, Chennai. Experience: Min 1 year in Prior Authorization.
Mumbai Location: Package: Max up to 4.6 LPA. Qualification: Graduate mandatory.
Chennai Location: Package: Max up to 40k. Qualification: Inter & above.
Shift Timings: 6:30 PM to 3:30 AM, WFO. Virtual and walk-in interviews available.
Perks and Benefits: 1. Two-way cab 2. Incentives
Interested candidates can share their updated resume with HR SAHARIKA - 9951772874 (share resume via WhatsApp). Refer your friends/colleagues.
Posted 1 month ago
5.0 - 10.0 years
20 - 27 Lacs
Pune
Hybrid
Job Description, Duties and Responsibilities: We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform. With the Data Engineering team you will get an opportunity to: Design and implement data engineering solutions that are scalable, reliable, and secure in the cloud environment. Understand and translate business needs into data engineering solutions. Build large-scale data pipelines that can handle big data sets using distributed data processing techniques and that support the efforts of the data science and data application teams. Partner with cross-functional stakeholders including product managers, architects, data quality engineers, and application and Quantitative Science end users to deliver engineering solutions. Contribute to defining data governance across the data platform.
Basic Requirements: A minimum of a BS degree in computer science, software engineering, or a related scientific discipline is desired. 3+ years of work experience in building scalable and robust data engineering solutions. Strong understanding of object-oriented programming and proficiency with programming in Python (TDD) and PySpark to build scalable algorithms. 3+ years of experience in distributed computing and big data processing using the Apache Spark framework, including Spark optimization techniques. 2+ years of experience with Databricks, Delta tables, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), and incremental data processing. Experience with Delta Lake and Unity Catalog. Advanced SQL coding and query optimization experience, including the ability to write analytical and nested queries. 3+ years of experience in building scalable ETL/ELT data pipelines on Databricks and AWS (EMR). 2+ years of experience orchestrating data pipelines using Apache Airflow/MWAA. Understanding and experience of AWS services including ADX, EC2, and S3. 3+ years of experience with data modeling techniques for structured/unstructured datasets. Experience with relational/columnar databases (Redshift, RDS) and interactive querying services (Athena/Redshift Spectrum). Passion for healthcare and improving patient outcomes. Demonstrated analytical thinking with strong problem-solving skills. Stays on top of emerging technologies and possesses a willingness to learn.
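For illustration only, a minimal sketch of the incremental Delta processing this role calls for (an upsert/MERGE into a Delta table, as commonly done on Databricks); the table, key column, and landing path are hypothetical assumptions and the delta-spark package is assumed to be available.

```python
# Hedged sketch: incremental upsert into a Delta table; names are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Latest batch of changed records landed by an upstream ingestion job (hypothetical path)
updates = spark.read.parquet("s3://example-landing/patients_delta/")
target = DeltaTable.forName(spark, "silver.patients")

# MERGE keeps the silver table in sync with the latest batch without full reloads
(
    target.alias("t")
    .merge(updates.alias("s"), "t.patient_id = s.patient_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```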
Posted 1 month ago
7.0 - 9.0 years
25 - 30 Lacs
Navi Mumbai
Work from Office
Key Responsibilities: Lead the end-to-end implementation of a data cataloging solution within AWS (preferably AWS Glue Data Catalog or third-party tools like Apache Atlas, Alation, Collibra, etc.). Establish and manage metadata frameworks for structured and unstructured data assets in the data lake and data warehouse environments. Integrate the data catalog with AWS-based storage solutions such as S3, Redshift, Athena, Glue, and EMR. Collaborate with data governance/BPRG/IT project teams to define metadata standards, data classifications, and stewardship processes. Develop automation scripts for catalog ingestion, lineage tracking, and metadata updates using Python, Lambda, PySpark, or custom Glue/EMR jobs. Work closely with data engineers, data architects, and analysts to ensure metadata is accurate, relevant, and up to date. Implement role-based access controls and ensure compliance with data privacy and regulatory standards. Create detailed documentation and deliver training/workshops for internal stakeholders on using the data catalog.
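For illustration only, a small sketch of the kind of catalog automation script described above, using boto3 against the Glue Data Catalog; the crawler name, database name, and classification parameter are hypothetical assumptions.

```python
# Hedged sketch: automating Glue Data Catalog upkeep with boto3; names are illustrative.
import boto3

glue = boto3.client("glue")

# Trigger a crawler so newly landed partitions are registered in the catalog
glue.start_crawler(Name="datalake-raw-crawler")

# Walk the catalog and report tables missing a (hypothetical) classification parameter
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName="raw"):
    for table in page["TableList"]:
        if "classification_level" not in table.get("Parameters", {}):
            print(f"Table {table['Name']} has no classification_level set")
```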
Posted 1 month ago
5.0 - 10.0 years
10 - 18 Lacs
Bengaluru, Mumbai (All Areas)
Hybrid
About the Role: We are seeking a passionate and experienced Subject Matter Expert and Trainer to deliver our comprehensive Data Engineering with AWS program. This role combines deep technical expertise with the ability to coach, mentor, and empower learners to build strong capabilities in data engineering, cloud services, and modern analytics tools. If you have a strong background in data engineering and love to teach, this is your opportunity to create impact by shaping the next generation of cloud data professionals.
Key Responsibilities: Deliver end-to-end training on the Data Engineering with AWS curriculum, including: Oracle SQL and ANSI SQL; Data Warehousing Concepts, ETL & ELT; Data Modeling and Data Vault; Python programming for data engineering; AWS Fundamentals (EC2, S3, Glue, Redshift, Athena, Kinesis, etc.); Apache Spark and Databricks; Data Ingestion, Processing, and Migration Utilities; Real-time Analytics and Compute Services (Airflow, Step Functions). Facilitate engaging sessions, virtual and in-person, and adapt instructional methods to suit diverse learning styles. Guide learners through hands-on labs, coding exercises, and real-world projects. Assess learner progress through evaluations, assignments, and practical assessments. Provide mentorship, resolve doubts, and inspire confidence in learners. Collaborate with the program management team to continuously improve course delivery and learner experience. Maintain up-to-date knowledge of AWS and data engineering best practices.
Ideal Candidate Profile: Experience: Minimum 5-8 years in Data Engineering, Big Data, or Cloud Data Solutions. Prior experience delivering technical training or conducting workshops is strongly preferred. Technical Expertise: Proficiency in SQL, Python, and Spark. Hands-on experience with AWS services: Glue, Redshift, Athena, S3, EC2, Kinesis, and related tools. Familiarity with Databricks, Airflow, Step Functions, and modern data pipelines. Certifications: AWS certifications (e.g., AWS Certified Data Analytics - Specialty) are a plus. Soft Skills: Excellent communication, facilitation, and interpersonal skills. Ability to break down complex concepts into simple, relatable examples. Strong commitment to learner success and outcomes. Email your application to: careers@edubridgeindia.in.
Posted 1 month ago
6.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Your Role: Knowledge of cloud computing using AWS services like Glue, Lambda, Athena, Step Functions, S3, etc. Knowledge of a programming language such as Python/Scala. Knowledge of Spark/PySpark (core and streaming) and hands-on experience building streaming transformations. Knowledge of building real-time or batch ingestion and transformation pipelines. Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications. 1. Applies scientific methods to analyse and solve software engineering problems. 2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development, and maintenance. 3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers. 4. The software engineer builds the skills and expertise of his/her software engineering discipline to reach the standard software engineer skill expectations for the applicable role, as defined in Professional Communities. 5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.
Your Profile: Working experience and strong knowledge of Databricks is a plus. Analyze existing queries for performance improvements. Develop procedures and scripts for data migration. Provide timely scheduled management reporting. Investigate exceptions regarding asset movements.
What you will love about working at Capgemini: We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons. Capgemini serves clients across industries, so you may get to work on varied data engineering projects involving real-time data pipelines, big data processing, and analytics. You'll work extensively with AWS services like S3, Redshift, Glue, Lambda, and more.
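For illustration only, a minimal PySpark Structured Streaming sketch of the kind of streaming transformation pipeline this role mentions; the Kafka broker, topic, schema, and output paths are hypothetical assumptions, and the spark-sql-kafka package is assumed to be on the classpath.

```python
# Hedged sketch: streaming ingestion plus windowed transformation; names/paths are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("stream-transform").getOrCreate()

schema = (
    StructType()
    .add("event_id", StringType())
    .add("amount", DoubleType())
    .add("event_ts", TimestampType())
)

# Ingest a JSON event stream from Kafka
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "payments")
    .load()
)

# Parse the payload and aggregate revenue per 5-minute window
events = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e")).select("e.*")
windowed = (
    events.withWatermark("event_ts", "10 minutes")
    .groupBy(F.window("event_ts", "5 minutes"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Write the aggregates to Parquet with checkpointing for recovery
query = (
    windowed.writeStream.outputMode("append")
    .format("parquet")
    .option("path", "s3://example-curated/payments_5min/")
    .option("checkpointLocation", "s3://example-curated/_checkpoints/payments_5min/")
    .start()
)
query.awaitTermination()
```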
Posted 1 month ago
7.0 - 8.0 years
9 - 10 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST).
What do you need for this opportunity? Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python.
MatchMove is looking for a Technical Lead - Data Platform. You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.
You will contribute to: Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.
Responsibilities: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
Requirements: At least 7 years of experience in data engineering. Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.
Brownie Points: Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores.
Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM.
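For illustration only, a sketch of writing a partitioned Apache Iceberg table from PySpark, in the spirit of the open-table-format work this posting describes; the catalog configuration, warehouse path, table identifier, and schema are hypothetical assumptions, and the iceberg-spark-runtime package is assumed to be available (a Glue-backed catalog would typically be used in practice instead of the Hadoop catalog shown).

```python
# Hedged sketch: partitioned Iceberg table with hidden daily partitioning; names are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-writer")
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-lakehouse/warehouse/")
    .getOrCreate()
)

# Hidden partitioning by day keeps the file layout manageable and enables time-travel queries
spark.sql("CREATE NAMESPACE IF NOT EXISTS lake.payments")
spark.sql(
    """
    CREATE TABLE IF NOT EXISTS lake.payments.transactions (
        txn_id STRING, account_id STRING, amount DECIMAL(18,2), txn_ts TIMESTAMP
    ) USING iceberg PARTITIONED BY (days(txn_ts))
    """
)

# Append the latest batch; the source schema must match the table definition above
txns = spark.read.parquet("s3://example-landing/transactions/")
txns.writeTo("lake.payments.transactions").append()
```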
Posted 1 month ago
4.0 - 6.0 years
9 - 13 Lacs
Solapur
Work from Office
Role Overview: EssentiallySports is seeking a Growth Product Manager who can scale our web platform's reach, engagement, and impact. This is not a traditional marketing role; your job is to engineer growth through product innovation, user journey optimization, and experimentation. You'll be the bridge between editorial, tech, and analytics, turning insights into actions that drive sustainable audience and revenue growth.
Key Responsibilities: Own the entire web user journey from page discovery to conversion to retention. Identify product-led growth opportunities using scroll depth, CTRs, bounce rates, and cohort behavior. Optimize high-traffic areas of the site (landing pages, article CTAs, newsletter modules) for conversion and time-on-page. Set up and scale A/B testing and experimentation pipelines for UI/UX, headlines, engagement surfaces, and signup flows. Collaborate with SEO and Performance Marketing teams to translate high-ranking traffic into engaged, loyal users. Partner with content and tech teams to develop recommendation engines, personalization strategies, and feedback loops. Monitor analytics pipelines from GA4 to Athena to dashboards to derive insights and drive decision-making. Introduce AI-driven features (LLM prompts, content auto-summaries, etc.) that personalize or simplify the user experience. Use tools like Jupyter, Google Analytics, Glue, and others to synthesize data into growth opportunities.
Who you are: 4+ years of experience in product growth, web engagement, or analytics-heavy roles. Deep understanding of web traffic behavior, engagement funnels, bounce/exit analysis, and retention loops. Hands-on experience running product experiments, growth sprints, and interpreting funnel analytics. Strong proficiency in SQL, GA4, marketing analytics, and campaign management. Understanding of customer segmentation, LTV analysis, cohort behavior, and user funnel optimization. Thrives in ambiguity and loves building things from scratch. Passionate about AI, automation, and building sustainable growth engines. Thinks like a founder: drives initiatives independently, hunts for insights, moves fast. A team player who collaborates across engineering, growth, and editorial teams. Proactive and solution-oriented, always spotting opportunities for real growth. Thrives in a fast-moving environment, taking ownership and driving impact.
Posted 1 month ago
1.0 - 5.0 years
1 - 5 Lacs
Chennai, Coimbatore
Work from Office
Dear Candidates, Greetings from Qways Technologies. We are hiring AR Callers for Hospital Billing & PB in Epic & Athena. Process: Medical Billing. Designation: AR Caller, Senior AR Caller. Salary: As per standards. Location: Chennai & Coimbatore. Free pick-up and drop. Interview Mode: Virtual & Direct. Should have good domain knowledge. Experience in end-to-end RCM would be preferred. Should be flexible towards the job and its requirements. Should be a good team player. Must have experience in Epic or Athena software. Interested candidates can ping me on WhatsApp or call directly; please WhatsApp the number given below. Number: 7397746782 - Maria (ping me on WhatsApp). Regards, HR Team, Qway Technologies, RR Tower 3, 3rd Floor, Guindy Industrial Estate, Chennai.
Posted 1 month ago
7.0 - 12.0 years
1 - 2 Lacs
Hyderabad
Remote
Role & Responsibilities: We are looking for a highly experienced Senior Cloud Data Engineer to lead the design, development, and optimization of our cloud-based data infrastructure. This role requires deep technical expertise in AWS services, data engineering best practices, and infrastructure automation. You will be instrumental in shaping our data architecture and enabling data-driven decision-making across the organization.
Key Responsibilities: Design, build, and maintain scalable and secure data pipelines using AWS Glue, Redshift, and Python. Develop and optimize SQL queries and stored procedures for complex data transformations and migrations. Automate infrastructure provisioning and deployment using Terraform, ensuring repeatability and compliance. Architect and implement data lake and data warehouse solutions on AWS. Collaborate with cross-functional teams including data scientists, analysts, and DevOps to deliver high-quality data solutions. Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost-efficiency. Implement data quality checks, validation frameworks, and monitoring tools. Ensure data security, privacy, and compliance with industry standards and regulations. Lead code reviews, mentor junior engineers, and promote best practices in data engineering. Participate in capacity planning, cost optimization, and performance tuning of cloud data infrastructure. Evaluate and integrate new tools and technologies to improve data engineering capabilities. Document technical designs, processes, and operational procedures. Support business intelligence and analytics teams by ensuring timely and accurate data availability.
Required Skills & Experience: 10+ years of experience in data engineering or cloud data architecture. Strong expertise in AWS Redshift, including schema design, performance tuning, and workload management. Proficiency in SQL and stored procedures for ETL and data migration tasks. Hands-on experience with Terraform for infrastructure as code (IaC) in AWS environments. Deep knowledge of AWS Glue for ETL orchestration and job development. Advanced programming skills in Python, especially for data processing and automation. Solid understanding of data warehousing, data lakes, and cloud-native data architectures.
Preferred candidate profile: AWS Certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect). Experience with CI/CD pipelines and DevOps practices. Familiarity with additional AWS services like S3, Lambda, CloudWatch, Step Functions, and IAM. Knowledge of data governance, lineage, and cataloging tools (e.g., AWS Glue Data Catalog, Apache Atlas). Experience with real-time data processing frameworks (e.g., Kinesis, Kafka, Spark Streaming).
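For illustration only, a small sketch of running a data-migration SQL statement against Redshift from Python via the Redshift Data API, one way the SQL/Python work described above is commonly automated; the workgroup, database, schemas, and SQL are hypothetical assumptions.

```python
# Hedged sketch: submitting SQL to Redshift via the Data API; identifiers are illustrative.
import boto3

rsd = boto3.client("redshift-data")

sql = """
INSERT INTO analytics.daily_revenue (order_date, revenue)
SELECT order_date, SUM(amount)
FROM staging.orders
WHERE order_date = CURRENT_DATE - 1
GROUP BY order_date;
"""

resp = rsd.execute_statement(
    WorkgroupName="example-workgroup",   # for a provisioned cluster, use ClusterIdentifier instead
    Database="dev",
    Sql=sql,
)
print("Statement id:", resp["Id"])
```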
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Noida, New Delhi, Delhi / NCR
Hybrid
Build and manage data infrastructure on AWS, including S3, Glue, Lambda, OpenSearch, Athena, and CloudWatch, using an IaC tool like Terraform. Design and implement scalable ETL pipelines with integrated validation and monitoring. Set up data quality frameworks using tools like Great Expectations, integrated with PostgreSQL or AWS Glue jobs. Implement automated validation checks at key points in the data flow: post-ingest, post-transform, and pre-load. Build centralized logging and alerting pipelines (e.g., using CloudWatch Logs, Fluent Bit, SNS, Filebeat, Logstash, or third-party tools). Define CI/CD processes for deploying and testing data pipelines (e.g., using Jenkins, GitHub Actions). Collaborate with developers and data engineers to enforce schema versioning, rollback strategies, and data contract enforcement.
Preferred candidate profile: 5+ years of experience in DataOps, DevOps, or data infrastructure roles. Proven experience with infrastructure-as-code (e.g., Terraform, CloudFormation). Proven experience with real-time data streaming platforms (e.g., Kinesis, Kafka). Proven experience building production-grade data pipelines and monitoring systems in AWS. Hands-on experience with tools like AWS Glue, S3, Lambda, Athena, and CloudWatch. Strong knowledge of Python and scripting for automation and orchestration. Familiarity with data validation frameworks such as Great Expectations, Deequ, or dbt tests. Experience with SQL-based data systems (e.g., PostgreSQL). Understanding of security, IAM, and compliance best practices in cloud data environments.
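For illustration only, a minimal sketch of a post-transform validation check with Great Expectations of the kind described above; the fluent API shown matches roughly Great Expectations 0.16-0.18 and may differ in other versions, and the file path and expectations are hypothetical assumptions.

```python
# Hedged sketch: declarative data-quality checks before the load step; names are illustrative.
import great_expectations as gx

context = gx.get_context()

# Read a batch of transformed data with the default pandas datasource
validator = context.sources.pandas_default.read_csv("orders_curated_sample.csv")

# Declare the checks that should hold before loading downstream
validator.expect_column_values_to_not_be_null("order_id")
validator.expect_column_values_to_be_unique("order_id")
validator.expect_column_values_to_be_between("amount", min_value=0)

# Run the validation; in a pipeline, a failed result would block the load and raise an alert
results = validator.validate()
print("Validation passed:", results.success)
```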
Posted 1 month ago
4.0 - 9.0 years
10 - 15 Lacs
Pune
Work from Office
MS Azure Infra (must); PaaS will be a plus; ensuring solutions meet regulatory standards and manage risk effectively. Hands-on experience using Terraform to design and deploy solutions (at least 5+ years), adhering to best practices to minimize risk and ensure compliance with regulatory requirements.
Primary Skill: AWS Infra along with PaaS will be an added advantage. Certification in Terraform is an added advantage. Certification in Azure and AWS is an added advantage. Able to present HLD, LLD, and ERC to large audiences. Able to drive solutions/projects independently and lead projects with a focus on risk management and regulatory compliance.
Secondary Skills: Amazon Elastic File System (EFS), Amazon Redshift, Amazon S3, Apache Spark, Ataccama DQ Analyzer, AWS Apache Airflow, AWS Athena, Azure Data Factory, Azure Data Lake Storage Gen2 (ADLS), Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse Analytics, BigID, C++, Cloud Storage, Collibra Data Governance (DG), Collibra Data Quality (DQ), Data Lake Storage, Data Vault Modeling, Databricks, DataProc, DDI, Dimensional Data Modeling, EDC AXON, Electronic Medical Record (EMR), Extract, Transform & Load (ETL), Financial Services Logical Data Model (FSLDM), Google Cloud Platform (GCP) BigQuery, Google Cloud Platform (GCP) Bigtable, Google Cloud Platform (GCP) Dataproc, HQL, IBM InfoSphere Information Analyzer, IBM Master Data Management (MDM), Informatica Data Explorer, Informatica Data Quality (IDQ), Informatica Intelligent Data Management Cloud (IDMC), Informatica Intelligent MDM SaaS, Inmon methodology, Java, Kimball Methodology, Metadata Encoding & Transmission Standards (METS), Metasploit, Microsoft Excel, Microsoft Power BI, NewSQL, NoSQL, OpenRefine, OpenVAS, Performance Tuning, Python, R, RDD Optimization, SAS, SQL, Tableau, Tenable Nessus, TIBCO Clarity
Posted 1 month ago
5.0 - 9.0 years
8 - 12 Lacs
Noida
Work from Office
5-9 years in data engineering and software development, such as ELT/ETL, data extraction, and manipulation in Data Lake/Data Warehouse environments. Expert-level hands-on experience with the following:
Python, SQL
PySpark
DBT and Apache Airflow
DevOps, Jenkins, CI/CD
Data Governance and Data Quality frameworks
Data Lakes, Data Warehouses
AWS services including S3, SNS, SQS, Lambda, EMR, Glue, Athena, EC2, VPC, etc.
Source code control - GitHub, VSTS, etc.
Mandatory Competencies: Python - Python; Database - SQL; Data on Cloud - AWS S3; DevOps - CI/CD; DevOps - GitHub; ETL - AWS Glue; Beh - Communication
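For illustration only, a minimal Airflow DAG sketch of the kind of orchestration this role lists; it assumes the apache-airflow-providers-amazon package, and the DAG id, schedule, and Glue job name are hypothetical assumptions.

```python
# Hedged sketch: a small Airflow DAG chaining an extract step and a Glue job; names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

def extract_to_s3(**_):
    # Placeholder for an extraction step (e.g., pulling from an API into S3)
    print("extract complete")

with DAG(
    dag_id="daily_curation",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    curate = GlueJobOperator(task_id="run_glue_curation", job_name="curate-daily")
    extract >> curate
```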
Posted 1 month ago
6.0 - 11.0 years
20 - 35 Lacs
Gurugram
Hybrid
Must-Have Skills (Core Requirements). Look for resumes that mention hands-on experience with:
Amazon S3 – storing and organizing data
AWS Glue – running ETL jobs (basic PySpark knowledge is a plus)
Glue Catalog – maintaining metadata for datasets
Amazon Athena – querying data using SQL
Parquet or CSV – basic familiarity with data file formats
AWS Lambda – for simple automation or triggers
Basic IAM knowledge – setting up access permissions
CloudWatch – monitoring jobs or logs
Understanding of ETL/ELT pipelines
Good-to-Have Skills (Preferred but not mandatory). These add value but are not essential at this level:
AWS Lake Formation – access control and permissions
Apache Airflow or Step Functions – workflow orchestration
Amazon Redshift – experience with data warehouse usage
AWS DMS or Kinesis – for data ingestion
Terraform or CloudFormation – for infrastructure setup
Exposure to QuickSight or any dashboarding tools
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
We are seeking a skilled Python Developer with expertise in Django, Flask, and API development to join our growing team. The Python Developer will be responsible for designing and implementing backend services, APIs, and integrations that power our core platform. The ideal candidate should have a strong foundation in Python programming, experience with Django and/or Flask frameworks, and a proven track record of delivering robust and scalable solutions.
Responsibilities: Design, develop, and maintain backend services and APIs using Python frameworks such as Django and Flask. Collaborate with front-end developers, product managers, and stakeholders to translate business requirements into technical solutions. Build and integrate RESTful APIs for seamless communication between our applications and external services.
Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience. 5+ years of professional experience as a Python Developer, with a focus on backend development.
Secondary Skills: Amazon Elastic File System (EFS), Amazon Redshift, Amazon S3, Apache Spark, Ataccama DQ Analyzer, AWS Apache Airflow, AWS Athena, Azure Data Factory, Azure Data Lake Storage Gen2 (ADLS), Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse Analytics, BigID, C++, Cloud Storage, Collibra Data Governance (DG), Collibra Data Quality (DQ), Data Lake Storage, Data Vault Modeling, Databricks, DataProc, DDI, Dimensional Data Modeling, EDC AXON, Electronic Medical Record (EMR), Extract, Transform & Load (ETL), Financial Services Logical Data Model (FSLDM), Google Cloud Platform (GCP) BigQuery, Google Cloud Platform (GCP) Bigtable, Google Cloud Platform (GCP) Dataproc, HQL, IBM InfoSphere Information Analyzer, IBM Master Data Management (MDM), Informatica Data Explorer, Informatica Data Quality (IDQ), Informatica Intelligent Data Management Cloud (IDMC), Informatica Intelligent MDM SaaS, Inmon methodology, Java, Kimball Methodology, Metadata Encoding & Transmission Standards (METS), Metasploit, Microsoft Excel, Microsoft Power BI, NewSQL, NoSQL, OpenRefine, OpenVAS, Performance Tuning, Python, R, RDD Optimization, SAS, SQL, Tableau, Tenable Nessus, TIBCO Clarity
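For illustration only, a minimal Flask REST endpoint of the kind of backend API work this posting describes; the routes, payload fields, and in-memory store are hypothetical assumptions standing in for a real persistence layer.

```python
# Hedged sketch: a tiny Flask REST API; routes and fields are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database layer
ORDERS = {}

@app.route("/api/orders", methods=["POST"])
def create_order():
    payload = request.get_json(force=True)
    order_id = len(ORDERS) + 1
    ORDERS[order_id] = {"id": order_id, "item": payload.get("item"), "qty": payload.get("qty", 1)}
    return jsonify(ORDERS[order_id]), 201

@app.route("/api/orders/<int:order_id>", methods=["GET"])
def get_order(order_id):
    order = ORDERS.get(order_id)
    return (jsonify(order), 200) if order else (jsonify({"error": "not found"}), 404)

if __name__ == "__main__":
    app.run(debug=True)
```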
Posted 1 month ago