2.0 - 4.0 years
3 - 7 Lacs
Bikaner
Work from Office
The role is for a data engineer with growth and business acumen, in the permissionless growth team. Someone who can connect the pipelines of millions of users, but at the same time knit a story of the how and why. Responsibilities: Own the data pipeline from web to Athena to email, end-to-end. You'll make the key decisions and see them through to successful user sign-up. Use data science to find real insights that translate into user engagement. Push changes every weekday. Personalization at scale: leverage fan behavior data to tailor content and improve lifetime value. Who are you? 2+ years of professional data engineering experience. Someone who spends as much time thinking about business insights as about engineering. A self-starter who drives initiatives. Excited to pick up AI and integrate it at various touch points. You have strong experience in data analysis, growth marketing, or audience development (media or newsletters? Even better). Aware of Athena, Glue, and Jupyter, or intent on picking them up. You're comfortable working with tools like Google Analytics, SQL, email marketing platforms (Beehiiv is a plus), and data visualization tools. Collaborative and want to see the team succeed in its goals. A problem-solving, proactive, solution-oriented mindset to spot opportunities and translate them into real growth. Ability to thrive in a fast-paced startup environment and take ownership for working through ambiguity. Excited to join a lean team in a big company that moves quickly.
Posted 1 week ago
2.0 - 4.0 years
3 - 7 Lacs
Aligarh
Work from Office
The role is for a data engineer with growth and business acumen, in the permissionless growth team. Someone who can connect the pipelines of millions of users, but at the same time knit a story of the how and why. Responsibilities: Own the data pipeline from web to Athena to email, end-to-end. You'll make the key decisions and see them through to successful user sign-up. Use data science to find real insights that translate into user engagement. Push changes every weekday. Personalization at scale: leverage fan behavior data to tailor content and improve lifetime value. Who are you? 2+ years of professional data engineering experience. Someone who spends as much time thinking about business insights as about engineering. A self-starter who drives initiatives. Excited to pick up AI and integrate it at various touch points. You have strong experience in data analysis, growth marketing, or audience development (media or newsletters? Even better). Aware of Athena, Glue, and Jupyter, or intent on picking them up. You're comfortable working with tools like Google Analytics, SQL, email marketing platforms (Beehiiv is a plus), and data visualization tools. Collaborative and want to see the team succeed in its goals. A problem-solving, proactive, solution-oriented mindset to spot opportunities and translate them into real growth. Ability to thrive in a fast-paced startup environment and take ownership for working through ambiguity. Excited to join a lean team in a big company that moves quickly.
Posted 1 week ago
2.0 - 4.0 years
3 - 7 Lacs
Shimoga
Work from Office
Full-time role at EssentiallySports for a Data Growth Engineer. EssentiallySports is the home for the underserved fan, delivering storytelling that goes beyond the headlines. As a media platform, we combine deep audience insights with cultural trends to meet fandom where it lives and where it goes next. Values: Focus on the user and all else will follow. Hire for intent and not for experience. Bootstrapping gives you the freedom to serve the customer and the team instead of investors. Internet and technology untap the niches. Action-oriented, integrity, freedom, strong communicators, and responsibility. All things equal, the one with high agency wins. EssentiallySports is a top-10 sports media platform in the U.S., generating over a billion pageviews a year and 30M+ monthly active users. This massive traffic fuels our data-driven culture, allowing us to build owned audiences at scale through organic growth, a model we take pride in, with zero CAC. The next phase of ES growth is our newsletter initiative: in less than 9 months, we've built a robust newsletter brand with impressive performance metrics: 5 newsletter brands, 700k+ highly engaged subscribers, and open rates of 40%-46%. The role is for a data engineer with growth and business acumen, in the permissionless growth team. Someone who can connect the pipelines of millions of users, but at the same time knit a story of the how and why. Responsibilities: Own the data pipeline from web to Athena to email, end-to-end. You'll make the key decisions and see them through to successful user sign-up. Use data science to find real insights that translate into user engagement. Push changes every weekday. Personalization at scale: leverage fan behavior data to tailor content and improve lifetime value. Who are you? 2+ years of professional data engineering experience. Someone who spends as much time thinking about business insights as about engineering. A self-starter who drives initiatives. Excited to pick up AI and integrate it at various touch points. You have strong experience in data analysis, growth marketing, or audience development (media or newsletters? Even better). Aware of Athena, Glue, and Jupyter, or intent on picking them up. You're comfortable working with tools like Google Analytics, SQL, email marketing platforms (Beehiiv is a plus), and data visualization tools. Collaborative and want to see the team succeed in its goals. A problem-solving, proactive, solution-oriented mindset to spot opportunities and translate them into real growth. Ability to thrive in a fast-paced startup environment and take ownership for working through ambiguity. Excited to join a lean team in a big company that moves quickly.
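For illustration, a minimal sketch of querying newsletter engagement data in Athena from Python with boto3, the kind of web-to-Athena analysis this role describes. The database, table, columns, and S3 output location are placeholders, not the actual schema.

```python
# Minimal sketch: run an Athena query with boto3 and poll for the result.
# Database, table, column names and the S3 output location are illustrative only.
import time
import boto3

athena = boto3.client("athena")

QUERY = """
    SELECT newsletter_brand,
           COUNT(DISTINCT subscriber_id) AS subscribers,
           AVG(CASE WHEN opened THEN 1.0 ELSE 0.0 END) AS open_rate
    FROM newsletter_events
    WHERE event_date >= date_add('day', -7, current_date)
    GROUP BY newsletter_brand
"""

def run_query(database: str, output_location: str) -> list:
    """Start an Athena query, wait until it finishes, and return the raw rows."""
    execution = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )
    query_id = execution["QueryExecutionId"]

    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]

if __name__ == "__main__":
    rows = run_query("analytics_db", "s3://my-athena-results/weekly/")
    print(rows[:5])
```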
Posted 1 week ago
4.0 - 6.0 years
9 - 13 Lacs
Solapur
Work from Office
Role Overview: EssentiallySports is seeking a Growth Product Manager who can scale our web platform's reach, engagement, and impact. This is not a traditional marketing role: your job is to engineer growth through product innovation, user journey optimization, and experimentation. You'll be the bridge between editorial, tech, and analytics, turning insights into actions that drive sustainable audience and revenue growth. Key Responsibilities: Own the entire web user journey from page discovery to conversion to retention. Identify product-led growth opportunities using scroll depth, CTRs, bounce rates, and cohort behavior. Optimize high-traffic areas of the site (landing pages, article CTAs, newsletter modules) for conversion and time-on-page. Set up and scale A/B testing and experimentation pipelines for UI/UX, headlines, engagement surfaces, and signup flows. Collaborate with SEO and Performance Marketing teams to translate high-ranking traffic into engaged, loyal users. Partner with content and tech teams to develop recommendation engines, personalization strategies, and feedback loops. Monitor analytics pipelines from GA4 to Athena to dashboards to derive insights and drive decision-making. Introduce AI-driven features (LLM prompts, content auto-summaries, etc.) that personalize or simplify the user experience. Use tools like Jupyter, Google Analytics, Glue, and others to synthesize data into growth opportunities. Who you are: 4+ years of experience in product growth, web engagement, or analytics-heavy roles. Deep understanding of web traffic behavior, engagement funnels, bounce/exit analysis, and retention loops. Hands-on experience running product experiments, growth sprints, and interpreting funnel analytics. Strong proficiency in SQL, GA4, marketing analytics, and campaign management. Understand customer segmentation, LTV analysis, cohort behavior, and user funnel optimization. Thrive in ambiguity and love building things from scratch. Passionate about AI, automation, and building sustainable growth engines. Thinks like a founder: drives initiatives independently, hunts for insights, moves fast. A team player who collaborates across engineering, growth, and editorial teams. Proactive and solution-oriented, always spotting opportunities for real growth. Thrive in a fast-moving environment, taking ownership and driving impact.
Posted 1 week ago
4.0 - 6.0 years
6 - 10 Lacs
Tamil Nadu
Work from Office
Introduction to the Role: Are you passionate about unlocking the power of data to drive innovation and transform business outcomes? Join our cutting-edge Data Engineering team and be a key player in delivering scalable, secure, and high-performing data solutions across the enterprise. As a Data Engineer, you will play a central role in designing and developing modern data pipelines and platforms that support data-driven decision-making and AI-powered products. With a focus on Python, SQL, AWS, PySpark, and Databricks, you'll enable the transformation of raw data into valuable insights by applying engineering best practices in a cloud-first environment. We are looking for a highly motivated professional who can work across teams to build and manage robust, efficient, and secure data ecosystems that support both analytical and operational workloads. Accountabilities: Design, build, and optimize scalable data pipelines using PySpark, Databricks, and SQL on AWS cloud platforms. Collaborate with data analysts, data scientists, and business users to understand data requirements and ensure reliable, high-quality data delivery. Implement batch and streaming data ingestion frameworks from a variety of sources (structured, semi-structured, and unstructured data). Develop reusable, parameterized ETL/ELT components and data ingestion frameworks. Perform data transformation, cleansing, validation, and enrichment using Python and PySpark. Build and maintain data models, data marts, and logical/physical data structures that support BI, analytics, and AI initiatives. Apply best practices in software engineering, version control (Git), code reviews, and agile development processes. Ensure data pipelines are well-tested, monitored, and robust with proper logging and alerting mechanisms. Optimize performance of distributed data processing workflows and large datasets. Leverage AWS services (such as S3, Glue, Lambda, EMR, Redshift, Athena) for data orchestration and lakehouse architecture design. Participate in data governance practices and ensure compliance with data privacy, security, and quality standards. Contribute to documentation of processes, workflows, metadata, and lineage using tools such as Data Catalogs or Collibra (if applicable). Drive continuous improvement in engineering practices, tools, and automation to increase productivity and delivery quality. Essential Skills / Experience: 4 to 6 years of professional experience in Data Engineering or a related field. Strong programming experience with Python and experience using Python for data wrangling, pipeline automation, and scripting. Deep expertise in writing complex and optimized SQL queries on large-scale datasets. Solid hands-on experience with PySpark and distributed data processing frameworks. Expertise working with Databricks for developing and orchestrating data pipelines. Experience with AWS cloud services such as S3, Glue, EMR, Athena, Redshift, and Lambda. Practical understanding of ETL/ELT development patterns and data modeling principles (Star/Snowflake schemas). Experience with job orchestration tools like Airflow, Databricks Jobs, or AWS Step Functions. Understanding of data lake, lakehouse, and data warehouse architectures. Familiarity with DevOps and CI/CD tools for code deployment (e.g., Git, Jenkins, GitHub Actions). Strong troubleshooting and performance optimization skills in large-scale data processing environments.
Excellent communication and collaboration skills, with the ability to work in cross-functional agile teams. Desirable Skills / Experience: AWS or Databricks certifications (e.g., AWS Certified Data Analytics, Databricks Data Engineer Associate/Professional). Exposure to data observability, monitoring, and alerting frameworks (e.g., Monte Carlo, Datadog, CloudWatch). Experience working in healthcare, life sciences, finance, or another regulated industry. Familiarity with data governance and compliance standards (GDPR, HIPAA, etc.). Knowledge of modern data architectures (Data Mesh, Data Fabric). Exposure to streaming data tools like Kafka, Kinesis, or Spark Structured Streaming. Experience with data visualization tools such as Power BI, Tableau, or QuickSight.
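As a rough illustration of the kind of pipeline described above, here is a minimal PySpark sketch that reads raw CSV from S3, cleanses it, and writes partitioned Parquet. The bucket paths and column names are assumptions for the example only, not a specific employer's datasets.

```python
# Minimal sketch of a PySpark batch transformation on AWS: read raw CSV from S3,
# cleanse it, and write partitioned Parquet. Paths and columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-cleansing").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-bucket/orders/")        # hypothetical raw zone
)

cleansed = (
    raw.dropDuplicates(["order_id"])                     # de-duplicate on business key
       .filter(F.col("order_amount").isNotNull())        # drop incomplete records
       .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
)

(
    cleansed.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")      # hypothetical curated zone
)
```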
Posted 1 week ago
8.0 - 13.0 years
16 - 22 Lacs
Hyderabad
Work from Office
Looking for a Data Engineer with 8+ years' experience to build scalable data pipelines on AWS/Azure, work with Big Data tools (Spark, Kafka), and support analytics teams. Must have strong coding skills in Python/Java and experience with SQL/NoSQL and cloud platforms. Required Candidate Profile: Strong experience in Java/Scala/Python. Worked with big data tech: Spark, Kafka, Flink, etc. Built real-time and batch data pipelines. Cloud: AWS, Azure, or GCP.
Posted 1 week ago
2.0 - 7.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Looking for an AWS & DevOps trainer to take 1-hour daily virtual classes (Mon-Fri). Should cover AWS services and DevOps tools (Jenkins, Docker, K8s, etc.), give hands-on tasks, guide on interviews & certifications, and support doubt-clearing sessions.
Posted 1 week ago
7.0 - 9.0 years
7 - 9 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities: 1. Design and implement scalable, high-performance data pipelines using AWS services 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda 3. Build and maintain data lakes using S3 and Delta Lake 4. Create and manage analytics solutions using Amazon Athena and Redshift 5. Design and implement database solutions using Aurora, RDS, and DynamoDB 6. Develop serverless workflows using AWS Step Functions 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL 8. Ensure data quality, security, and compliance with industry standards 9. Collaborate with data scientists and analysts to support their data needs 10. Optimize data architecture for performance and cost-efficiency 11. Troubleshoot and resolve data pipeline and infrastructure issues Required Qualifications: 1. Bachelor's degree in Computer Science, Information Technology, or a related field 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3 4. Experience with data lake technologies, particularly Delta Lake 5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration 9. Familiarity with data modeling and schema design 10. Knowledge of data security and compliance requirements 11. Excellent problem-solving and analytical skills 12. Strong communication and collaboration abilities Preferred Qualifications: 1. AWS Certified Data Analytics - Specialty 2. AWS Certified Solutions Architect - Associate or Professional 3. Experience with real-time data processing using Kinesis or Kafka 4. Knowledge of machine learning workflows on AWS (e.g., SageMaker) 5. Familiarity with containerization technologies (Docker, Kubernetes) 6. Experience with CI/CD pipelines and infrastructure-as-code (e.g., CloudFormation, Terraform) Technical Skills: - AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions - Big Data: Hadoop, Spark, Delta Lake - Programming: Python, PySpark - Databases: SQL, PostgreSQL, NoSQL - Data Warehousing and Analytics - ETL/ELT processes - Data Lake architectures - Version control: Git - Agile methodologies
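A minimal sketch of one serverless building block mentioned above: an AWS Lambda handler that starts a Glue job run with boto3 when a new object lands in S3. The job name and argument keys are hypothetical, not an actual production configuration.

```python
# Minimal sketch: Lambda handler that kicks off a Glue ETL job via boto3.
# The Glue job name and argument names are placeholders.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Start a Glue job run, passing the S3 object that triggered the event."""
    record = event["Records"][0]["s3"]          # assumes an S3 event trigger
    run = glue.start_job_run(
        JobName="curate-delta-lake",            # hypothetical Glue job
        Arguments={
            "--source_bucket": record["bucket"]["name"],
            "--source_key": record["object"]["key"],
        },
    )
    return {"JobRunId": run["JobRunId"]}
```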
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be working as a Business Intelligence Engineer III in Pune on a 6-month Contract basis with the TekWissen organization. Your primary responsibility will be to work on Data Engineering on AWS, including designing and implementing scalable data pipelines using AWS services such as S3, AWS Glue, Redshift, and Athena. You will also focus on Data Modeling and Transformation by developing and optimizing dimensional data models to support various business intelligence and analytics use cases. Additionally, you will collaborate with stakeholders to understand reporting and analytics requirements and build interactive dashboards and reports using visualization tools such as QuickSight. Your role will also involve implementing data quality checks and monitoring processes to ensure data integrity and reliability. You will be responsible for managing and maintaining the AWS infrastructure required for the data and analytics platform, optimizing performance, cost, and security of the underlying cloud resources. Collaboration with cross-functional teams and sharing knowledge and best practices will be essential for identifying data-driven insights. As a successful candidate, you should have at least 3 years of experience as a Business Intelligence Engineer or Data Engineer, with a strong focus on AWS cloud technologies. Proficiency in designing and implementing data pipelines using AWS services like S3, Glue, Redshift, Athena, and Lambda is mandatory. You should also possess expertise in data modeling, dimensional modeling, data transformation techniques, and experience in deploying business intelligence solutions using tools like QuickSight and Tableau. Strong SQL and Python programming skills are required for data processing and analysis. Knowledge of cloud architecture patterns, security best practices, and cost optimization on AWS is crucial. Excellent communication and collaboration skills are necessary to effectively work with cross-functional teams. Hands-on experience with Apache Spark, Airflow, or other big data technologies, as well as familiarity with AWS DevOps practices and tools, agile software development methodologies, and AWS certifications, will be considered preferred skills. The position requires a candidate with a graduate degree. TekWissen Group is an equal opportunity employer supporting workforce diversity.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You have extensive experience in analytics and large-scale data processing across diverse data platforms and tools. Your responsibilities will include managing data storage and transformation across AWS S3, DynamoDB, Postgres, and Delta Tables with efficient schema design and partitioning. You will develop scalable analytics solutions using Athena and automate workflows with proper monitoring and error handling. Ensuring data quality, access control, and compliance through robust validation, logging, and governance practices will be a crucial part of your role. Additionally, you will design and maintain data pipelines using Python, Spark, the Delta Lake framework, AWS Step Functions, EventBridge, AppFlow, and OAuth. The tech stack you will be working with includes S3, Postgres, DynamoDB, Tableau, Python, and Spark.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You are a seasoned Data Engineer with expertise in SQL, Python, and AWS, and you are responsible for designing and managing data solutions. Your role involves understanding and analyzing client business requirements, recommending modern data tools, developing and maintaining data pipelines and ETL processes, creating and optimizing data models for client reporting and analytics, ensuring seamless data integration and visualization, communicating with clients for updates and issue resolution, and staying updated on industry best practices and emerging technologies. You should have 3-5 years of experience in data engineering/analytics, proficiency in SQL and Python for data manipulation and analysis, knowledge of PySpark, experience with data warehouse platforms like Redshift and Google BigQuery, familiarity with AWS services like S3, Glue, and Athena, proficiency in Airflow, familiarity with event tracking platforms like GA or Amplitude, strong problem-solving skills, adaptability, excellent communication skills, proactive client engagement, and the ability to collaborate effectively with team members and clients. In return, you will receive a competitive salary with a bonus, employee discounts across all brands, medical and health insurance, a collaborative work environment, and a good-vibes work culture.
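For illustration, a minimal Airflow DAG sketch of the kind of orchestration this role involves. The DAG id, schedule, and task logic are placeholders rather than a specific client's pipeline.

```python
# Minimal sketch of an Airflow DAG with one daily extract-and-load task.
# The task body is a placeholder for real ingestion logic.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load(**context):
    """Placeholder task: pull a day's data and load it into the warehouse."""
    ds = context["ds"]          # Airflow's logical date as YYYY-MM-DD
    print(f"Processing partition for {ds}")

with DAG(
    dag_id="daily_events_pipeline",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```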
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a talented Big Data Engineer, you will be responsible for developing and managing our company's Big Data solutions. Your role will involve designing and implementing Big Data tools and frameworks, implementing ELT processes, collaborating with development teams, building cloud platforms, and maintaining the production system. To excel in this position, you should possess in-depth knowledge of Hadoop technologies, exceptional project management skills, and advanced problem-solving abilities. A successful Big Data Engineer comprehends the company's needs and establishes scalable data solutions to meet current and future requirements effectively. Your responsibilities will include meeting with managers to assess the company's Big Data requirements, developing solutions on AWS utilizing tools like Apache Spark, Databricks, Delta Tables, EMR, Athena, Glue, and Hadoop. You will also be involved in loading disparate data sets, conducting pre-processing services using tools such as Athena, Glue, and Spark, collaborating with software research and development teams, building cloud platforms for application development, and ensuring the maintenance of production systems. The requirements for this role include a minimum of 5 years of experience as a Big Data Engineer, proficiency in Python & PySpark, expertise in Hadoop, Apache Spark, Databricks, Delta Tables, and AWS data analytics services. Additionally, you should have extensive experience with Delta Tables, JSON, Parquet file formats, familiarity with AWS data analytics services like Athena, Glue, Redshift, EMR, knowledge of Data warehousing, NoSQL, and RDBMS databases. Good communication skills and the ability to solve complex data processing and transformation-related problems are essential for success in this role.
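A minimal sketch of a Delta table upsert with PySpark, one of the Delta Tables patterns listed above. It assumes a session configured via the delta-spark package; the paths and join key are illustrative only.

```python
# Minimal sketch: upsert (merge) an incoming batch into a Delta table on S3.
# Requires the delta-spark package; paths, keys, and columns are placeholders.
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

updates = spark.read.parquet("s3://example-staging/customers/")        # incoming batch
target = DeltaTable.forPath(spark, "s3://example-lake/customers_delta/")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()      # overwrite existing rows with the new values
    .whenNotMatchedInsertAll()   # insert rows seen for the first time
    .execute()
)
```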
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As an experienced Software/Data Engineer with a passion for creating meaningful solutions, you will be joining a global team of innovators at a Siemens company. In this role, you will be responsible for developing data integration solutions using Java, Scala, and/or Python, with a focus on data and Business Intelligence (BI). Your primary responsibilities will include building data pipelines, data transformation, and data modeling to support various integration methods and information delivery techniques. To excel in this position, you should have a Bachelor's degree in an Engineering or Science discipline or equivalent experience, along with at least 5 years of software/data engineering experience. Additionally, you should have a minimum of 3 years of experience in a data and BI focused role. Proficiency in data integration development using languages such as Python, PySpark, and SparkSQL, as well as experience with relational databases and SQL optimization, are essential for this role. Experience with AWS-based data services technologies (e.g., Glue, RDS, Athena) and Snowflake CDW, along with familiarity with BI tools like PowerBI, will be beneficial. Your willingness to experiment with new technologies and adapt to agile development practices will be key to your success in this role. Join us in creating a brighter future where smarter infrastructure protects the environment and connects us all. Our culture is built on collaboration, support, and a commitment to helping each other grow both personally and professionally. If you are looking to make a positive impact and contribute to a more sustainable world, we invite you to explore how far your passion can take you with us.
Posted 1 week ago
7.0 - 12.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Job Description: Spark, Java. Strong SQL writing skills, data discovery, data profiling, data exploration, and data wrangling skills. Kafka, AWS S3, Lake Formation, Athena, Glue, Autosys or similar tools, FastAPI (secondary). Strong SQL skills to support data analysis and embedded business logic in SQL, data profiling and gap assessment. Collaborate with development and business SMEs within technology to understand data requirements, and perform data analysis to support and validate business logic, data integrity and data quality rules within a centralized data platform. Experience working within the banking/financial services industry with a solid understanding of financial products and business processes.
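For illustration, a minimal PySpark sketch of the kind of data profiling described above: row counts, null counts, and distinct counts per column. The table name is a placeholder.

```python
# Minimal sketch: profile a table registered in the catalog with Spark SQL functions.
# "trades" is a hypothetical table name used only for the example.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("profiling").getOrCreate()
df = spark.table("trades")

profile = df.select(
    F.count(F.lit(1)).alias("row_count"),
    *[F.sum(F.col(c).isNull().cast("int")).alias(f"{c}_nulls") for c in df.columns],
    *[F.countDistinct(c).alias(f"{c}_distinct") for c in df.columns],
)
profile.show(truncate=False)
```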
Posted 1 week ago
4.0 - 8.0 years
14 - 24 Lacs
Hyderabad
Work from Office
Experience Required: Minimum 4.5+ years Job Summary: We are seeking a skilled Data Engineer with a strong background in data ingestion, processing, and storage. The ideal candidate will have experience working with various data sources and technologies, particularly in a cloud environment. You will be responsible for designing and implementing data pipelines, ensuring data quality, and optimizing data storage solutions. Key Responsibilities: Design, develop, and maintain scalable data pipelines for data ingestion and processing using Python, Spark, and AWS services. Work with on-prem Oracle databases, batch files, and Confluent Kafka for data sourcing. Implement and manage ETL processes using AWS Glue and EMR for batch and streaming data. Develop and maintain data storage solutions using Medallion Architecture in S3, Redshift, and Oracle. Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs. Monitor and optimize data workflows using Airflow and other orchestration tools. Ensure data quality and integrity throughout the data lifecycle. Implement CI/CD practices for data pipeline deployment using Terraform and other tools. Utilize monitoring and logging tools such as CloudWatch, Datadog, and Splunk to ensure system reliability and performance. Communicate effectively with stakeholders to gather requirements and provide updates on project status. Technical Skills Required: Proficient in Python for data processing and automation. Strong experience with Apache Spark for large-scale data processing. Familiarity with AWS S3 for data storage and management. Experience with Kafka for real-time data streaming. Knowledge of Redshift for data warehousing solutions. Proficient in Oracle databases for data management. Experience with AWS Glue for ETL processes. Familiarity with Apache Airflow for workflow orchestration. Experience with EMR for big data processing. Mandatory: Strong AWS data engineering skills.
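A minimal sketch of the Kafka-to-S3 ingestion step described above, using Spark Structured Streaming to land raw events in a bronze layer. Broker, topic, and bucket names are placeholders, and the job assumes the spark-sql-kafka package is available on the cluster.

```python
# Minimal sketch: stream events from Kafka into a raw ("bronze") S3 Parquet layer.
# Broker addresses, topic, and paths are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-to-bronze").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "customer-events")
    .option("startingOffsets", "latest")
    .load()
    .select(
        F.col("key").cast("string"),
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-bronze/customer-events/")
    .option("checkpointLocation", "s3://example-bronze/_checkpoints/customer-events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```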
Posted 1 week ago
5.0 - 7.0 years
15 - 30 Lacs
Gurugram
Remote
Design, develop, and maintain robust data pipelines and ETL/ELT processes on AWS. Leverage AWS services such as S3, Glue, Lambda, Redshift, Athena, EMR, and others to build scalable data solutions. Write efficient and reusable code using Python for data ingestion, transformation, and automation tasks. Collaborate with cross-functional teams including data analysts, data scientists, and software engineers to support data needs. Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost efficiency. Ensure data quality, security, and governance across all systems. Communicate technical solutions clearly and effectively with both technical and non-technical stakeholders. Required Skills & Qualifications: 5+ years of experience in data engineering roles. Strong hands-on experience with Amazon Web Services (AWS), particularly in data-related services (e.g., S3, Glue, Lambda, Redshift, EMR, Athena). Proficiency in Python for scripting and data processing. Experience with SQL and working with relational databases. Solid understanding of data architecture, data modeling, and data warehousing concepts. Experience with CI/CD pipelines and version control tools (e.g., Git). Excellent verbal and written communication skills. Proven ability to work independently in a fully remote environment. Preferred Qualifications: Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions. Familiarity with big data technologies such as Apache Spark or Hadoop. Exposure to infrastructure-as-code tools like Terraform or CloudFormation. Knowledge of data privacy and compliance standards.
Posted 1 week ago
5.0 - 10.0 years
7 - 11 Lacs
Noida
Work from Office
Expert in Python (5+ years). Expert in Django or any similar framework (5+ years). Experience (2+ years) in TypeScript, JavaScript and JS frameworks (Angular > 2 with Angular Material). Good knowledge of RDBMS (preferably Postgres). Experience and sound knowledge of AWS services (ECS, Lambda, deployment pipelines, etc.). Excellent written and verbal communication skills. Very good analytical and problem-solving skills. Ability to pick up new technologies. Write clean, maintainable and efficient code. Willingness to learn and understand the business domain. Mandatory Competencies: Programming Language - Python - Django; Beh - Communication; User Interface - TypeScript - TypeScript; User Interface - JavaScript - JavaScript; Cloud - AWS - ECS; Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate
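For illustration, a minimal Django sketch of a Postgres-backed model and a JSON view of the kind this role involves. The app, model, and field names are placeholders, not an actual project schema, and the code assumes it lives inside an installed Django app.

```python
# Minimal sketch of a Django model plus a JSON endpoint for a JS frontend.
# Model and field names are hypothetical.
from django.db import models
from django.http import JsonResponse

class Order(models.Model):
    """A simple model that Django maps to a Postgres table."""
    reference = models.CharField(max_length=64, unique=True)
    amount = models.DecimalField(max_digits=10, decimal_places=2)
    created_at = models.DateTimeField(auto_now_add=True)

def order_summary(request):
    """Return a lightweight JSON summary (e.g., for an Angular client)."""
    data = [
        {"reference": o.reference, "amount": str(o.amount)}
        for o in Order.objects.order_by("-created_at")[:20]
    ]
    return JsonResponse({"orders": data})
```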
Posted 1 week ago
3.0 - 8.0 years
6 - 10 Lacs
Gurugram
Work from Office
Understands the process flow and the impact on the project module outcome. Works on coding assignments for specific technologies based on the project requirements and documentation available. Debugs basic software components and identifies code defects. Focuses on building depth in project-specific technologies. Expected to develop domain knowledge along with technical skills. Effectively communicates with team members, project managers and clients, as required. A proven high-performer and team-player, with the ability to take the lead on projects. Design and create S3 buckets and folder structures (raw, cleansed_data, output, script, temp-dir, spark-ui). Develop AWS Lambda functions (Python/Boto3) to download Bhav Copy via REST API and ingest it into S3. Author and maintain AWS Glue Spark jobs to partition data by scrip, year and month, and to convert CSV to Parquet with Snappy compression. Configure and run AWS Glue Crawlers to populate the Glue Data Catalog. Write and optimize AWS Athena SQL queries to generate business-ready datasets. Monitor, troubleshoot and tune data workflows for cost and performance. Document architecture, code and operational runbooks. Collaborate with analytics and downstream teams to understand requirements and deliver SLAs. Technical Skills: 3+ years hands-on experience with AWS data services (S3, Lambda, Glue, Athena). PostgreSQL basics. Proficient in SQL and data partitioning strategies. Experience with Parquet file formats and compression techniques (Snappy). Ability to configure Glue Crawlers and manage the AWS Glue Data Catalog. Understanding of serverless architecture and best practices in security, encryption and cost control. Good documentation, communication and problem-solving skills. Qualifications: 3-5 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university. A computer science background is preferred.
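A minimal sketch of the Lambda ingestion step described above: download a Bhav Copy file over HTTP and write it to a partitioned raw prefix in S3. The endpoint URL, bucket, and key layout are assumptions for the example, not the actual source or folder design.

```python
# Minimal sketch: Lambda handler that downloads a daily file and lands it in S3.
# SOURCE_URL and BUCKET are placeholders; the key layout mirrors a year/month partition.
import datetime
import urllib.request
import boto3

s3 = boto3.client("s3")

BUCKET = "example-market-data"                            # hypothetical bucket
SOURCE_URL = "https://example.com/bhavcopy/{date}.csv"    # placeholder REST endpoint

def handler(event, context):
    run_date = datetime.date.today().strftime("%Y-%m-%d")
    url = SOURCE_URL.format(date=run_date)

    with urllib.request.urlopen(url, timeout=30) as response:
        body = response.read()

    key = f"raw/bhavcopy/year={run_date[:4]}/month={run_date[5:7]}/{run_date}.csv"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)
    return {"bucket": BUCKET, "key": key, "bytes": len(body)}
```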
Posted 1 week ago
8.0 - 12.0 years
30 - 35 Lacs
Chennai
Work from Office
Technical Skills: Experience building data transformation pipelines using DBT and SSIS. Moderate programming experience with Python. Moderate experience with AWS Glue. Strong experience with SQL and the ability to write efficient code and manage it through Git repositories. Nice-to-have skills: Experience working with SSIS. Experience working in the wealth management industry. Experience in agile development methodologies.
Posted 1 week ago
4.0 - 6.0 years
6 - 10 Lacs
Gurugram
Work from Office
Role Description: As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists and business analysts to understand business requirements and design scalable, reliable and cost-effective solutions that meet those requirements. Roles & Responsibilities: Designing, developing and deploying cloud-based data platforms using Amazon Web Services (AWS). Integrating and processing large amounts of structured and unstructured data from various sources. Implementing and optimizing ETL processes and data pipelines. Developing and maintaining security and access controls. Collaborating with other teams to ensure the consistency and integrity of data. Troubleshooting and resolving data platform issues. Technical Skills / Requirements: In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda. Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies. Familiarity with cloud-based infrastructure and deployment, specifically on AWS. Strong knowledge of programming languages such as Python, Java, and SQL. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team. Qualifications: 4-6 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university. A computer science background is preferred.
Posted 1 week ago
5.0 - 6.0 years
12 - 16 Lacs
Thiruvananthapuram
Remote
Build & manage infrastructure for data storage, processing & analysis. Experience in AWS Cloud Services (Glue, Lambda, Athena, Lakehouse). AWS CDK for Infrastructure-as-Code (IaC) with TypeScript. Skills in Python, PySpark, Spark SQL, TypeScript. Required Candidate Profile: 5 to 6 years of data pipeline development & orchestration using AWS Glue. Leadership experience. UK clients; work timings will be aligned with the client's requirements and may follow UK time zones.
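For illustration, a minimal AWS CDK sketch of an infrastructure-as-code stack. The listing asks for CDK with TypeScript; the same constructs are shown here via CDK's Python bindings to keep the examples in one language, and the stack and bucket names are placeholders.

```python
# Minimal sketch: define a data-lake landing bucket as an AWS CDK stack (CDK v2, Python).
# Stack and bucket names are illustrative only.
from aws_cdk import App, Stack, RemovalPolicy, aws_s3 as s3
from constructs import Construct

class DataLakeStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Raw landing bucket for ingested files, retained on stack deletion
        s3.Bucket(
            self,
            "RawBucket",
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,
        )

app = App()
DataLakeStack(app, "DataLakeStack")
app.synth()
```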
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As an AWS Data Engineer at Sufalam Technologies, located in Ahmedabad, India, you will be responsible for designing and implementing data engineering solutions on AWS. Your role will involve developing data models, managing ETL processes, and ensuring the efficient operation of data warehousing solutions. Collaboration with Finance, Data Science, and Product teams is crucial to understand reconciliation needs and ensure timely data delivery. Your expertise will contribute to data analytics activities supporting business decision-making and strategic goals. Key responsibilities include designing and implementing scalable and secure ETL/ELT pipelines for processing financial data. Collaborating closely with various teams to understand reconciliation needs and ensuring timely data delivery. Implementing monitoring and alerting for pipeline health and data quality, maintaining detailed documentation on data flows, models, and reconciliation logic, and ensuring compliance with financial data handling and audit standards. To excel in this role, you should have 5-6 years of experience in data engineering with a strong focus on AWS data services. Hands-on experience with AWS Glue, Lambda, S3, Redshift, Athena, Step Functions, Lake Formation, and IAM is essential for secure data governance. A solid understanding of data reconciliation processes in the finance domain, strong SQL skills, experience with data warehousing and data lakes, and proficiency in Python or PySpark for data transformation are required. Knowledge of financial accounting principles or experience working with financial datasets (AR, AP, General Ledger, etc.) would be beneficial.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You will be responsible for building the most personalized and intelligent news experiences for India's next 750 million digital users. As our Principal Data Engineer, your main tasks will include designing and maintaining data infrastructure to power personalization systems and analytics platforms. This involves ensuring seamless data flow from source to consumption, architecting scalable data pipelines to process massive volumes of user interaction and content data, and developing robust ETL processes for large-scale transformations and analytical processing. You will also be involved in creating and maintaining data lakes/warehouses that consolidate data from multiple sources, optimized for ML model consumption and business intelligence. Additionally, you will implement data governance practices and collaborate with the ML team to ensure the right data availability for recommendation systems. To excel in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field, along with 8-12 years of data engineering experience, including at least 3 years in a senior role. You must possess expert-level SQL skills and have strong experience in the Apache Spark ecosystem (Spark SQL, Streaming, SparkML), as well as proficiency in Python/Scala. Experience with the AWS data ecosystem (Redshift, S3, Glue, EMR, Kinesis, Lambda, Athena) and ETL frameworks (Glue, Airflow) is essential. A proven track record of building large-scale data pipelines in production environments, particularly in high-traffic digital media, will be advantageous. Excellent communication skills are also required, as you will need to collaborate effectively across teams in a fast-paced environment that demands engineering agility.
Posted 2 weeks ago
3.0 - 8.0 years
6 - 10 Lacs
Gurugram
Work from Office
Role Description: Understands the process flow and the impact on the project module outcome. Works on coding assignments for specific technologies based on the project requirements and documentation available. Debugs basic software components and identifies code defects. Focuses on building depth in project-specific technologies. Expected to develop domain knowledge along with technical skills. Effectively communicates with team members, project managers and clients, as required. A proven high-performer and team-player, with the ability to take the lead on projects. Design and create S3 buckets and folder structures (raw, cleansed_data, output, script, temp-dir, spark-ui). Develop AWS Lambda functions (Python/Boto3) to download Bhav Copy via REST API and ingest it into S3. Author and maintain AWS Glue Spark jobs to partition data by scrip, year and month, and to convert CSV to Parquet with Snappy compression. Configure and run AWS Glue Crawlers to populate the Glue Data Catalog. Write and optimize AWS Athena SQL queries to generate business-ready datasets. Monitor, troubleshoot and tune data workflows for cost and performance. Document architecture, code and operational runbooks. Collaborate with analytics and downstream teams to understand requirements and deliver SLAs. Technical Skills: 3+ years hands-on experience with AWS data services (S3, Lambda, Glue, Athena). PostgreSQL basics. Proficient in SQL and data partitioning strategies. Experience with Parquet file formats and compression techniques (Snappy). Ability to configure Glue Crawlers and manage the AWS Glue Data Catalog. Understanding of serverless architecture and best practices in security, encryption and cost control. Good documentation, communication and problem-solving skills. Qualifications: 3-5 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university. A computer science background is preferred.
Posted 2 weeks ago
2.0 - 4.0 years
3 - 3 Lacs
Kochi
Work from Office
Job Description: Experienced in AR calling, denial management, eligibility checks, and authorization verification. Must be familiar with MDLand and Athena. Call insurance companies on behalf of physicians and carry out further examination of outstanding accounts receivables. Prioritize unpaid claims for calling according to the length of time they have been outstanding. Call insurance companies directly and convince them to pay the outstanding claims. Check the relevance of insurance information offered by the patient. Evaluate unpaid insurance claims. Call insurance companies to check on the status of claims and verify authorization. Transfer the outstanding balance to the patient if he/she doesn't have adequate insurance coverage. If the claim has already been paid, ask the insurance company for the Explanation of Benefits (EOB). Make corrections to the claim based on inputs from the insurance company. Good organizational skills to implement timely follow-up. Willingness to work night shifts and weekends. Excellent verbal and written communication skills. Strong reporting skills. Ability to follow an established work schedule. Ability to follow instructions precisely.
Posted 2 weeks ago