5.0 - 10.0 years
5 - 15 Lacs
Gurugram, Bengaluru, Mumbai (All Areas)
Hybrid
Project Role: Software Development Lead
Project Role Description: Develop and configure software systems either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes, and tools to support a client, project, or entity.
Must-have skills: AWS BigData. A minimum of 5 years of experience is required.
Educational Qualification: 15 years of full-time education.
Summary: As a Software Development Lead, you will be responsible for developing and configuring software systems either end-to-end or for a specific stage of the product lifecycle. You will apply your knowledge of technologies, applications, methodologies, processes, and tools to support clients, projects, or entities. Your typical day will involve collaborating with the team, making team decisions, engaging with multiple teams, and providing solutions to problems for your immediate team and across multiple teams. You will also contribute to key decisions and ensure the successful execution of projects.
Roles & Responsibilities:
- AWS EMR, Glue, S3, Python/PySpark
- Must have SQL experience
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the development and configuration of software systems
- Ensure the successful execution of projects
- Contribute to the improvement of processes and methodologies
Professional & Technical Skills:
- Must-have skills: proficiency in AWS BigData
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity
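For orientation only, a minimal PySpark sketch of the kind of S3-based EMR/Glue transformation this listing describes might look like the following; the bucket names, paths, and columns are hypothetical placeholders, not anything specified by the employer.

```python
# Minimal PySpark sketch of an S3-to-S3 cleanup/aggregation job (hypothetical names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-orders-cleanup").getOrCreate()

# Read raw CSV data landed in S3 by an upstream ingestion job.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3://example-raw-bucket/orders/"))

# Basic SQL-style cleanup: drop bad rows, normalize a column, aggregate.
cleaned = (raw
           .filter(F.col("order_amount").isNotNull())
           .withColumn("order_date", F.to_date("order_date"))
           .groupBy("customer_id", "order_date")
           .agg(F.sum("order_amount").alias("daily_spend")))

# Write the curated dataset back to S3 as partitioned Parquet for downstream querying.
(cleaned.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-curated-bucket/orders_daily/"))

spark.stop()
```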
Posted 1 month ago
8.0 - 13.0 years
8 - 18 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role: ETL Lead Developer (8+ years)
Location: Hyderabad
Mandatory Skills: ETL and data warehouse concepts; AWS, Glue; SQL; Python; Snowflake; CI/CD tools (Jenkins, GitHub)
JD - Data Warehouse: In this role you will be part of a team working to develop solutions enabling the business to leverage data as an asset at the bank. As a Lead ETL Developer, you will lead teams to develop, maintain, and enhance code, ensuring all IT SDLC processes are documented and practiced, while working closely with multiple technology teams across the enterprise. The Lead ETL Developer should have extensive knowledge of data warehousing and cloud technologies. If you consider data a strategic asset, evangelize the value of good data and insights, and have a passion for learning and continuous improvement, this role is for you.
Key Responsibilities:
Translate requirements and data mapping documents into a technical design. Develop, enhance, and maintain code following best practices and standards. Create and execute unit test plans. Support regression and system testing efforts. Debug and resolve issues found during testing and/or production. Communicate status, issues, and blockers to the project team. Support continuous improvement by identifying and acting on improvement opportunities.
Basic Qualifications:
Bachelor's degree or military experience in a related field (preferably computer science). At least 8 years of experience in ETL development within a data warehouse. Deep understanding of enterprise data warehousing best practices and standards. Strong software engineering experience covering the design, development, and operation of robust, highly scalable cloud infrastructure services. Strong experience with Python/PySpark, DataStage ETL, and SQL development. Proven experience in cloud infrastructure projects with hands-on migration expertise on public clouds such as AWS and Azure, preferably Snowflake. Knowledge of cybersecurity organization practices, operations, risk management processes, principles, architectural requirements, engineering, and threats and vulnerabilities, including incident response methodologies. Understanding of authentication and authorization services and identity and access management. Strong communication and interpersonal skills. Strong organizational skills and the ability to work independently as well as with a team.
Preferred Qualifications:
AWS Certified Solutions Architect Associate, AWS Certified DevOps Engineer Professional, and/or AWS Certified Solutions Architect Professional. Experience defining future-state roadmaps for data warehouse applications. Experience leading teams of developers within a project. Experience in the financial services (banking) industry.
Posted 1 month ago
10.0 - 13.0 years
8 - 17 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Detailed Job Description - Skill Set:
Looking for a 10+ years, highly experienced and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling.
AWS: Deep hands-on expertise with key AWS data services and infrastructure.
Databricks: Expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog.
Coding: Exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL.
Architectural: Strong data modeling and architectural design skills with a focus on practical implementation.
Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools.
Design & Build: Lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog).
Databricks Expertise: Own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.
Mandatory Skills: AWS, Databricks
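As a rough illustration of the Databricks/Delta Lake implementation work this role centers on, an incremental upsert into a Delta table might look roughly like the sketch below; the table names, merge key, and storage paths are assumed, and the session is the one Databricks provides in a job or notebook.

```python
# Illustrative-only Delta Lake upsert (hypothetical table names, key, and paths).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

# On Databricks a session already exists; getOrCreate() simply returns it.
spark = SparkSession.builder.getOrCreate()

# Incremental batch of changed records landed in cloud storage.
updates = spark.read.parquet("s3://example-bronze/customers_changes/")

# Target Delta table (e.g. registered in Unity Catalog as silver.customers).
target = DeltaTable.forName(spark, "silver.customers")

# Merge (upsert) the changes into the silver table on the business key.
(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Optionally compact small files for query performance.
spark.sql("OPTIMIZE silver.customers")
```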
Posted 1 month ago
8.0 - 10.0 years
8 - 18 Lacs
Gurugram
Work from Office
Oracle PL/SQL
Key Responsibilities:
Develop and maintain complex PL/SQL procedures, packages, triggers, functions, and views in Oracle. Hands-on experience in PostgreSQL and AWS is a must. Migrate and refactor PL/SQL logic and data from Oracle to PostgreSQL. Optimize SQL queries and ensure high-performance database access across platforms. Design and implement data models, schemas, and stored procedures in PostgreSQL. Work closely with application developers, data architects, and DevOps teams to ensure seamless database integration with applications. Develop scripts and utilities to automate database tasks, backups, and monitoring using AWS services. Leverage AWS cloud services such as RDS, S3, Lambda, and Glue for data processing and storage. Participate in code reviews, performance tuning, and troubleshooting of database-related issues. Ensure data integrity, consistency, and security across environments.
Interested candidates, please share your resume (madhumithak@sightspectrum.in).
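One small example of the "automate database tasks and backups using AWS services" responsibility: a hedged boto3 sketch that snapshots a PostgreSQL RDS instance. The instance identifier and region are hypothetical placeholders.

```python
# Hedged sketch: take a manual snapshot of an RDS instance and wait for it to complete.
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds", region_name="ap-south-1")

instance_id = "example-postgres-prod"  # hypothetical RDS instance identifier
stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
snapshot_id = f"{instance_id}-manual-{stamp}"

# Kick off the snapshot; RDS performs it asynchronously.
rds.create_db_snapshot(
    DBSnapshotIdentifier=snapshot_id,
    DBInstanceIdentifier=instance_id,
)

# Optionally block until the snapshot is available (useful in scheduled jobs).
waiter = rds.get_waiter("db_snapshot_available")
waiter.wait(DBSnapshotIdentifier=snapshot_id)
print(f"Snapshot {snapshot_id} is available")
```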
Posted 1 month ago
8.0 - 13.0 years
35 - 45 Lacs
Kochi
Work from Office
Experience in Python programming for data tasks. Experience in AWS data technologies, including AWS Data Lakehouse, S3, Redshift, Athena, etc., and exposure to equivalent Azure technologies. Experience working with applications built on a microservices architecture.
Required Candidate Profile: Extensive experience with ETL and data engineering tools and processes, particularly within the AWS ecosystem, including AWS Glue, Athena, AWS Data Pipeline, AWS Lambda, etc.
Posted 1 month ago
0.0 - 4.0 years
5 - 10 Lacs
Mumbai
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.
As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation.
In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.
Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist.
Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful.
So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience
•Expertise in data mining, data storage and Extract-Transform-Load (ETL) processes
•Experience in data pipelines development and tooling, e.g., Glue, Databricks, Synapse, or Dataproc
•Experience with both relational and NoSQL databases, e.g., PostgreSQL, DB2, MongoDB
•Excellent problem-solving, analytical, and critical thinking skills
•Ability to manage multiple projects simultaneously, while maintaining a high level of attention to detail
•Communication skills: must be able to communicate with both technical and non-technical colleagues, to derive technical requirements from business needs and problems
Preferred Skills and Experience
•Experience working as a Data Engineer and/or in cloud modernization
•Experience in data modelling, to create a conceptual model of how data is connected and how it will be used in business processes
•Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization
•Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate
•Experience working with Kafka, ElasticSearch, Kibana and maintaining a data lake
•Managing interfaces and monitoring production deployments, including log-shipping tools
•Experience with updates, upgrades, patches, vulnerability assessment (VA) closure, and support using industry-standard tools
•Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology
Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Posted 1 month ago
2.0 - 3.0 years
5 - 9 Lacs
Kochi
Work from Office
Job Title - + +
Management Level:
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift
Experience: 3.5 - 5 years of experience is required
Educational Qualification: Graduation (accurate educational details should be captured)
Job Summary: You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Your responsibilities will include:
Roles and Responsibilities:
Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help our business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services like vision, translation, etc., to generate outcomes that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions ensuring data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD.
Professional and Technical Skills:
Expert in Python, Scala, PySpark, PyTorch, JavaScript (at least any 2). Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras, etc.), and SQL, with 2-3 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, or Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.
Additional Information:
Experience working with cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core - Data Engineer; Databricks Data Engineering.
Qualification
Experience: 3.5 - 5 years of experience is required
Educational Qualification: Graduation (accurate educational details should be captured)
Posted 1 month ago
2.0 - 3.0 years
5 - 9 Lacs
Kochi
Work from Office
Job Title - + +
Management Level:
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift
Job Summary: You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Your responsibilities will include:
Roles and Responsibilities:
Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help our business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services like vision, translation, etc., to generate outcomes that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions ensuring data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD.
Professional and Technical Skills:
Expert in Python, Scala, PySpark, PyTorch, JavaScript (at least any 2). Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras, etc.), and SQL, with 2-3 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, or Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.
Additional Information:
Experience working with cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core - Data Engineer; Databricks Data Engineering.
Qualification
Experience: 3.5 - 5 years of experience is required
Educational Qualification: Graduation (accurate educational details should be captured)
Posted 1 month ago
3.0 - 4.0 years
5 - 9 Lacs
Kochi
Work from Office
Job Title - + +
Management Level:
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python, PySpark
Good-to-have skills: Redshift
Job Summary: We are seeking a highly skilled and experienced Senior Data Engineer to join our growing Data and Analytics team. The ideal candidate will have deep expertise in Databricks and cloud data warehousing, with a proven track record of designing and building scalable data pipelines, optimizing data architectures, and enabling robust analytics capabilities. This role involves working collaboratively with cross-functional teams to ensure the organization leverages data as a strategic asset. Your responsibilities will include:
Roles & Responsibilities:
Design, build, and maintain scalable data pipelines and ETL processes using Databricks and other modern tools. Architect, implement, and manage cloud-based data warehousing solutions on Databricks (Lakehouse architecture). Develop and maintain optimized data lake architectures to support advanced analytics and machine learning use cases. Collaborate with stakeholders to gather requirements, design solutions, and ensure high-quality data delivery. Optimize data pipelines for performance and cost efficiency. Implement and enforce best practices for data governance, access control, security, and compliance in the cloud. Monitor and troubleshoot data pipelines to ensure reliability and accuracy. Lead and mentor junior engineers, fostering a culture of continuous learning and innovation. Excellent communication skills. Ability to work independently as well as with clients based out of Western Europe.
Professional & Technical Skills:
Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help our business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services like vision, translation, etc., to generate outcomes that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions ensuring data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD.
Expert in Python, Scala, PySpark, PyTorch, JavaScript (at least any 2). Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras, etc.), and SQL, with 3-4 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, or Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.
Additional Information:
Experience working with cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core - Data Engineer; Databricks Data Engineering.
Qualification
Experience: 5-8 years of experience is required
Educational Qualification: Graduation (accurate educational details should be captured)
Posted 1 month ago
2.0 - 3.0 years
4 - 8 Lacs
Kochi
Work from Office
Job Title - Data Engineer Sr. Analyst, ACS SONG
Management Level: Level 10 - Sr. Analyst
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift
Job Summary: You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Your responsibilities will include:
Roles and Responsibilities:
Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help our business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services like vision, translation, etc., to generate outcomes that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions ensuring data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD.
Professional and Technical Skills:
Expert in Python, Scala, PySpark, PyTorch, JavaScript (at least any 2). Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras, etc.), and SQL, with 2-3 years of hands-on experience working on these technologies. Experience in one of the many BI tools such as Tableau, Power BI, or Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.
Additional Information:
Experience working with cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core - Data Engineer; Databricks Data Engineering.
About Our Company | Accenture
Qualification
Experience: 3.5 - 5 years of experience is required
Educational Qualification: Graduation (accurate educational details should be captured)
Posted 1 month ago
4.0 - 8.0 years
11 - 16 Lacs
Hyderabad
Work from Office
Job Summary: We are looking for a highly skilled AWS Data Architect to design and implement scalable, secure, and high-performing data architecture solutions on AWS. The ideal candidate will have hands-on experience in building data lakes, data warehouses, and data pipelines, along with a solid understanding of data governance and cloud security best practices.
Roles and Responsibilities:
Design and implement data architecture solutions on AWS using services such as S3, Redshift, Glue, Lake Formation, Athena, and Lambda. Develop scalable ETL/ELT workflows and data pipelines using AWS Glue, Apache Spark, or AWS Data Pipeline. Define and implement data governance, security, and compliance strategies, including IAM policies, encryption, and data cataloging. Create and manage data lakes and data warehouses that are scalable, cost-effective, and secure. Collaborate with data engineers, analysts, and business stakeholders to develop robust data models and reporting solutions. Evaluate and recommend tools, technologies, and best practices to optimize data architecture and ensure high-quality solutions. Ensure data quality, performance tuning, and optimization for large-scale data storage and processing.
Required Skills and Qualifications:
Proven experience in AWS data services such as S3, Redshift, Glue, etc. Strong knowledge of data modeling, data warehousing, and big data architecture. Hands-on experience with ETL/ELT tools and data pipeline frameworks. Good understanding of data security and compliance in cloud environments. Excellent problem-solving skills and ability to work collaboratively with cross-functional teams. Strong verbal and written communication skills.
Preferred Skills:
AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect certification. Experience in performance tuning and optimizing large datasets.
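To make the S3/Athena side of this role concrete, here is an assumption-laden boto3 sketch that runs an Athena query over S3-backed tables and reads the results; the database, table, bucket, and region names are placeholders, not details from the listing.

```python
# Hedged sketch: run an Athena query against an S3 data lake and print the result rows.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

query = ("SELECT customer_id, SUM(order_amount) AS spend "
         "FROM orders GROUP BY customer_id LIMIT 10")

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "example_lake_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = run["QueryExecutionId"]

# Poll until the query finishes (Athena runs asynchronously).
while True:
    state = athena.get_query_execution(
        QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row holds the column headers
        print([col.get("VarCharValue") for col in row["Data"]])
```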
Posted 1 month ago
2.0 - 4.0 years
6 - 10 Lacs
Hyderabad
Hybrid
Company: AstroSoft Technologies (https://www.astrosofttech.com/)
AstroSoft is an award-winning company that specializes in the areas of Data, Analytics, Cloud, AI/ML, Innovation, and Digital. We have a customer-first mindset and take extreme ownership in delivering solutions and projects for our customers, and we have consistently been recognized by our clients as the premium partner to work with. We bring to bear top-tier talent, a robust and structured project execution framework, and our significant experience over the years, and we have an impeccable record in delivering solutions and projects for our clients. Founded in 2004; headquarters in FL, USA; corporate office in Hyderabad, India.
Benefits from AstroSoft Technologies: H1B sponsorship (depends on project and performance), lunch and dinner (every day), group health insurance coverage, industry-standard leave policy, skill enhancement certification, hybrid mode.
Role & Responsibilities
Job Title: AWS Engineer
Required Skills:
Minimum of 2+ years of direct experience as an AWS Data Engineer. Strong experience in AWS services like Redshift, ETL with Glue, Spark, Python, Lambda, Kafka, S3, EMR, etc., is a must. Monitoring tools: CloudWatch. Experience in development and support projects as well. Strong verbal and written communication skills. Strong experience and understanding of streaming architecture and development practices using Kafka, Spark, Flink, etc. Strong AWS development experience using S3, SNS, SQS, MWAA (Airflow), Glue, DMS, and EMR. Strong knowledge of one or more programming languages: Python/Java/Scala (ideally Python). Experience using Terraform to build IaC components in AWS. Strong experience with ETL tools in AWS; ODI experience is a plus. Strong experience with database platforms: Oracle, AWS Redshift. Strong experience in SQL tuning, tuning ETL solutions, and physical optimization of databases. Very familiar with SRE concepts, including evaluating and implementing monitoring and observability tools like Splunk, Datadog, CloudWatch, and other job, log, or dashboard concepts for customer support and application health checks. Ability to collaborate with our business partners to understand and implement their requirements. Excellent interpersonal skills and the ability to build consensus across teams. Strong critical thinking and the ability to think out of the box. Self-motivated and able to perform under pressure. AWS certified (preferred).
Qualifications:
Education: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
Experience: Proven experience as an AWS Support Engineer or in a similar role. Hands-on experience with a wide range of AWS services (e.g., EC2, S3, RDS, Lambda, CloudFormation, IAM).
Soft Skills: Excellent communication and interpersonal skills. Strong problem-solving and analytical skills. Ability to work independently and as part of a team. Customer-focused mindset with a commitment to delivering high-quality support.
What We Offer: Competitive salary and benefits package. Opportunities for professional growth and development. A collaborative and supportive work environment. Access to the latest AWS technologies and training resources.
If you are passionate about cloud technology and enjoy helping customers solve complex technical challenges, we would love to hear from you! Acknowledge the mail with your updated CV: Julekha.Gousiya@astrosofttech.com. As we discussed, please revert with your acknowledgment and the details below.
Total Experience -
AWS Redshift / Glue -
Current Location -
Current Company -
C-CTC -
Ex-CTC -
Offer -
NP -
Ready to relocate to Hyderabad (Y/N) - Yes
Hybrid (Y/N) - Yes
Posted 1 month ago
6.0 - 10.0 years
12 - 20 Lacs
Hyderabad
Hybrid
AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar)
CI/CD (Jenkins or another)
Relational databases experience (any)
NoSQL databases experience (any)
Microservices, domain services, API gateways, or similar
Containers (Docker, K8s, or similar)
Posted 1 month ago
5.0 - 9.0 years
12 - 19 Lacs
Hyderabad
Work from Office
Responsibilities:
* Design, develop & maintain data pipelines using Airflow, Python & SQL.
* Optimize performance through Spark & Splunk analytics.
* Collaborate with cross-functional teams on big data initiatives.
* AWS
Posted 1 month ago
2.0 - 3.0 years
6 - 7 Lacs
Pune
Work from Office
Data Engineer Job Description
Jash Data Sciences: Letting Data Speak! Do you love solving real-world data problems with the latest and best techniques? And having fun while solving them in a team? Then come and join our high-energy team of passionate data people. Jash Data Sciences is the right place for you. We are a cutting-edge Data Sciences and Data Engineering startup based in Pune, India. We believe in continuous learning and evolving together. And we let the data speak!
What will you be doing?
You will be discovering trends in the data sets and developing algorithms to transform raw data for further analytics. Create data pipelines to bring in data from various sources, with different formats, transform it, and finally load it to the target database. Implement ETL/ELT processes in the cloud using tools like Airflow, Glue, Stitch, Cloud Data Fusion, and Dataflow. Design and implement data lakes, data warehouses, and data marts in AWS, GCP, or Azure using Redshift, BigQuery, PostgreSQL, etc. Create efficient SQL queries and understand query execution plans for tuning queries on engines like PostgreSQL. Performance-tune OLAP/OLTP databases by creating indices, tables, and views. Write Python scripts for the orchestration of data pipelines. Have thoughtful discussions with customers to understand their data engineering requirements. Break complex requirements into smaller tasks for execution.
What do we need from you?
Strong Python coding skills with basic knowledge of algorithms/data structures and their application. Strong understanding of data engineering concepts including ETL, ELT, data lakes, data warehousing, and data pipelines. Experience designing and implementing data lakes, data warehouses, and data marts that support terabyte-scale data. A track record of implementing data pipelines on public cloud environments (AWS/GCP/Azure) is highly desirable. A clear understanding of database concepts like indexing, query performance optimization, views, and various types of schemas. Hands-on SQL programming experience with knowledge of windowing functions, subqueries, and various types of joins. Experience working with big data technologies like PySpark/Hadoop. A good team player with the ability to communicate with clarity. Show us your git repo or blog!
Qualification
1-2 years of experience working on data engineering projects for Data Engineer I. 2-5 years of experience working on data engineering projects for Data Engineer II. 1-5 years of hands-on Python programming experience. A Bachelor's/Master's degree in Computer Science is good to have. Courses or certifications in the area of data engineering will be given a higher preference. Candidates who have demonstrated a drive for learning and keeping up to date with technology by continuing to do various courses/self-learning will be given high preference.
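As an illustration of the pipeline orchestration described above, a minimal Airflow DAG (assuming Airflow 2.x) chaining extract, transform, and load steps might look like the sketch below; the task bodies are stubs and all names are hypothetical.

```python
# Illustrative Airflow 2.x DAG of a daily extract-transform-load pattern (stub tasks).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # e.g. pull a day's worth of records from a source API or database
    print("extracting", context["ds"])


def transform(**context):
    # e.g. clean/normalize with pandas, or hand heavy lifting to Spark
    print("transforming", context["ds"])


def load(**context):
    # e.g. COPY the transformed files into Redshift/PostgreSQL
    print("loading", context["ds"])


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```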
Posted 1 month ago
8.0 - 12.0 years
25 - 40 Lacs
Chennai
Work from Office
We are seeking a highly skilled Data Architect to design and implement robust, scalable, and secure data solutions on AWS Cloud. The ideal candidate should have expertise in AWS services, data modeling, ETL processes, and big data technologies, with hands-on experience in Glue, DMS, Python, PySpark, and MPP databases like Snowflake, Redshift, or Databricks.
Key Responsibilities:
Architect and implement data solutions leveraging AWS services such as EC2, S3, IAM, Glue (mandatory), and DMS for efficient data processing and storage. Develop scalable ETL pipelines using AWS Glue, Lambda, and PySpark to support data transformation, ingestion, and migration. Design and optimize data models following Medallion architecture, Data Mesh, and Enterprise Data Warehouse (EDW) principles. Implement data governance, security, and compliance best practices using IAM policies, encryption, and data masking. Work with MPP databases such as Snowflake, Redshift, or Databricks, ensuring performance tuning, indexing, and query optimization. Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to design efficient data integration strategies. Ensure high availability and reliability of data solutions by implementing monitoring, logging, and automation in AWS. Evaluate and recommend best practices for ETL workflows, data pipelines, and cloud-based data warehousing solutions. Troubleshoot performance bottlenecks and optimize query execution plans, indexing strategies, and data partitioning.
Job Requirement
Required Qualifications & Skills:
Strong expertise in AWS cloud services: compute (EC2), storage (S3), and security (IAM). Proficiency in programming languages: Python, PySpark, and AWS Lambda. Mandatory experience in ETL tools: AWS Glue and DMS for data migration and transformation. Expertise in MPP databases: Snowflake, Redshift, or Databricks; knowledge of RDBMS (Oracle, SQL Server) is a plus. Deep understanding of data modeling techniques: Medallion architecture, Data Mesh, and EDW principles. Experience in designing and implementing large-scale, high-performance data solutions. Strong analytical and problem-solving skills, with the ability to optimize data pipelines and storage solutions. Excellent communication and collaboration skills, with experience working in agile environments.
Preferred Qualifications:
AWS certification (AWS Certified Data Analytics, AWS Certified Solutions Architect, or equivalent). Experience with real-time data streaming (Kafka, Kinesis, or similar). Familiarity with Infrastructure as Code (Terraform, CloudFormation). Understanding of data governance frameworks and compliance standards (GDPR, HIPAA, etc.).
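A loose sketch of the kind of Glue job this listing implies: reading a bronze-layer table from the Glue Data Catalog (for example, one landed by DMS and crawled) and writing a curated silver layer back to S3. The database, table, and bucket names are assumptions, not details from the listing.

```python
# Assumption-heavy AWS Glue job skeleton: catalog read -> PySpark cleanup -> S3 write.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Bronze-layer table registered in the Data Catalog by a crawler or DMS task.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="example_bronze", table_name="orders"
).toDF()

# Silver layer: deduplicate on the business key and standardize the date column.
silver = (orders
          .dropDuplicates(["order_id"])
          .withColumn("order_date", F.to_date("order_date")))

(silver.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-silver-bucket/orders/"))

job.commit()
```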
Posted 1 month ago
5.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Strong experience with Python, SQL, PySpark, and AWS Glue. Good to have: shell scripting, Kafka. Good knowledge of DevOps pipeline usage (Jenkins, Bitbucket, EKS, Lightspeed). Experience with AWS tools (AWS S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.). Orchestration using Airflow. Good to have: streaming technologies and processing engines - Kinesis, Kafka, Pub/Sub, and Spark Streaming. Good debugging skills. Should have a strong hands-on design and engineering background in AWS, across a wide range of AWS services, with the ability to demonstrate working on large engagements. Strong experience with, and implementation of, data lakes, data warehousing, and data lakehouse architectures. Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures. Monitor data systems performance and implement optimization strategies. Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership. Demonstrable knowledge of applying data engineering best practices (coding practices to DS, unit testing, version control, code review). Experience in the insurance domain preferred.
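For the streaming skills mentioned above, a minimal Spark Structured Streaming sketch reading JSON events from Kafka and appending them to S3 could look like the following; the broker, topic, schema, and paths are hypothetical, and the job assumes the Spark-Kafka connector package is available on the cluster.

```python
# Rough sketch: Kafka -> Spark Structured Streaming -> S3 (hypothetical names and schema).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-events-stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker-1:9092")
       .option("subscribe", "payments")
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers bytes; cast to string and parse the JSON payload.
events = (raw
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3://example-stream-sink/payments/")
         .option("checkpointLocation", "s3://example-stream-sink/_checkpoints/payments/")
         .outputMode("append")
         .start())

query.awaitTermination()
```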
Posted 1 month ago
5.0 - 10.0 years
12 - 17 Lacs
Hyderabad
Work from Office
Job Area: Information Technology Group, Information Technology Group > IT Data Engineer
General Summary: The developer will play an integral role in the PTEIT Machine Learning Data Engineering team: design, develop, and support data pipelines in a hybrid cloud environment to enable advanced analytics, and design, develop, and support CI/CD of data pipelines and services.
- 5+ years of experience with Python or equivalent programming using OOP, data structures, and algorithms
- Develop new services in AWS using serverless and container-based services
- 3+ years of hands-on experience with the AWS suite of services (EC2, IAM, S3, CDK, Glue, Athena, Lambda, Redshift, Snowflake, RDS)
- 3+ years of expertise in scheduling data flows using Apache Airflow
- 3+ years of strong data modelling (functional, logical, and physical) and data architecture experience in a data lake and/or data warehouse
- 3+ years of experience with SQL databases
- 3+ years of experience with CI/CD and DevOps using Jenkins
- 3+ years of experience with event-driven architecture, especially change data capture
- 3+ years of experience in Apache Spark, SQL, Redshift (or BigQuery or Snowflake), and Databricks
- Deep understanding of building efficient data pipelines with data observability, data quality, schema drift handling, alerting, and monitoring
- Good understanding of data catalogs, data governance, compliance, security, and data sharing
- Experience in building reusable services across data processing systems
- Should have the ability to work and contribute beyond defined responsibilities
- Excellent communication and interpersonal skills with deep problem-solving skills
Minimum Qualifications:
3+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems, or a related field, OR 5+ years of IT-related work experience without a Bachelor's degree. 2+ years of any combination of academic or work experience with programming (e.g., Java, Python). 1+ year of any combination of academic or work experience with SQL or NoSQL databases. 1+ year of any combination of academic or work experience with data structures and algorithms.
5 years of industry experience and a minimum of 3 years of experience in data engineering development with highly reputed organizations.
- Proficiency in Python and AWS
- Excellent problem-solving skills
- Deep understanding of data structures and algorithms
- Proven experience in building cloud-native software, preferably with the AWS suite of services
- Proven experience in designing and developing data models using RDBMS (Oracle, MySQL, etc.)
Desirable:
- Exposure to or experience in other cloud platforms (Azure and GCP)
- Experience working on the internals of large-scale distributed systems and databases such as Hadoop and Spark
- Working experience on data lakehouse platforms (One House, Databricks Lakehouse)
- Working experience with data lakehouse file formats (Delta Lake, Iceberg, Hudi)
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
Posted 1 month ago
2.0 - 7.0 years
15 - 20 Lacs
Hyderabad
Work from Office
Job Area: Engineering Group, Engineering Group > Software Engineering
General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.
Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 2+ years of software engineering or related work experience, OR Master's degree in Engineering, Information Systems, Computer Science, or a related field and 1+ year of software engineering or related work experience, OR PhD in Engineering, Information Systems, Computer Science, or a related field. 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.
Preferred Qualifications:
3+ years of experience as a Data Engineer or in a similar role. Experience with data modeling, data warehousing, and building ETL pipelines. Solid working experience with Python, AWS analytical technologies, and related resources (Glue, Athena, QuickSight, SageMaker, etc.). Experience with big data tools, platforms, and architecture, with solid working experience with SQL. Experience working in a very large data warehousing environment and with distributed systems. Solid understanding of various data exchange formats and their complexities. Industry experience in software development, data engineering, business intelligence, data science, or a related field with a track record of manipulating, processing, and extracting value from large datasets. Strong data visualization skills. Basic understanding of machine learning; prior experience in ML engineering is a plus. Ability to manage on-premises data and make it interoperate with AWS-based pipelines. Ability to interface with wireless systems/software engineers and understand the wireless ML domain; prior experience in the wireless (5G) domain is a plus.
Education: Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline. Preferred: Master's in CS/ECE with a Data Science / ML specialization.
Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 3+ years of software engineering or related work experience, OR Master's degree in Engineering, Information Systems, Computer Science, or a related field, OR PhD in Engineering, Information Systems, Computer Science, or a related field. 3+ years of experience with a programming language such as C, C++, Java, Python, etc.
Develops, creates, and modifies general computer applications software or specialized utility programs. Analyzes user needs and develops software solutions. Designs software or customizes software for client use with the aim of optimizing operational efficiency. May analyze and design databases within an application area, working individually or coordinating database development as part of a team. Modifies existing software to correct errors, allow it to adapt to new hardware, or to improve its performance.
Analyzes user needs and software requirements to determine feasibility of design within time and cost constraints. Confers with systems analysts, engineers, programmers and others to design system and to obtain information on project limitations and capabilities, performance requirements and interfaces. Stores, retrieves, and manipulates data for analysis of system capabilities and requirements. Designs, develops, and modifies software systems, using scientific analysis and mathematical models to predict and measure outcome and consequences of design. Principal Duties and Responsibilities: Completes assigned coding tasks to specifications on time without significant errors or bugs. Adapts to changes and setbacks in order to manage pressure and meet deadlines. Collaborates with others inside project team to accomplish project objectives. Communicates with project lead to provide status and information about impending obstacles. Quickly resolves complex software issues and bugs. Gathers, integrates, and interprets information specific to a module or sub-block of code from a variety of sources in order to troubleshoot issues and find solutions. Seeks others' opinions and shares own opinions with others about ways in which a problem can be addressed differently. Participates in technical conversations with tech leads/managers. Anticipates and communicates issues with project team to maintain open communication. Makes decisions based on incomplete or changing specifications and obtains adequate resources needed to complete assigned tasks. Prioritizes project deadlines and deliverables with minimal supervision. Resolves straightforward technical issues and escalates more complex technical issues to an appropriate party (e.g., project lead, colleagues). Writes readable code for large features or significant bug fixes to support collaboration with other engineers. Determines which work tasks are most important for self and junior engineers, stays focused, and deals with setbacks in a timely manner. Unit tests own code to verify the stability and functionality of a feature.
Posted 1 month ago
5.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications which need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.
Responsibilities:
Design and implement the data modeling, data ingestion, and data processing for various datasets. Design, develop, and maintain an ETL framework for various new data sources. Develop data ingestion using AWS Glue/EMR and data pipelines using PySpark, Python, and Databricks. Build orchestration workflows using Airflow and Databricks job workflows. Develop and execute ad hoc data ingestion to support business analytics. Proactively interact with vendors for any questions and report the status accordingly. Explore and evaluate tools/services to support business requirements. Ability to learn to create a data-driven culture and impactful data strategies. Aptitude for learning new technologies and solving complex problems.
Qualifications:
Minimum of a bachelor's degree, preferably in Computer Science, Information Systems, or Information Technology. Minimum 5 years of experience on cloud platforms such as AWS, Azure, and GCP. Minimum 5 years of experience in Amazon Web Services like VPC, S3, EC2, Redshift, RDS, EMR, Athena, IAM, Glue, DMS, Data Pipeline & API, Lambda, etc. Minimum of 5 years of experience in ETL and data engineering using Python, AWS Glue, AWS EMR/PySpark, and Airflow for orchestration. Minimum 2 years of experience in Databricks, including Unity Catalog, data engineering job workflow orchestration, and dashboard generation based on business requirements. Minimum 5 years of experience in SQL, Python, and source control such as Bitbucket, and CI/CD for code deployment. Experience in PostgreSQL, SQL Server, MySQL, and Oracle databases. Experience in MPP such as AWS Redshift, AWS EMR, and Databricks SQL warehouses and compute clusters. Experience in distributed programming with Python, Unix scripting, MPP, and RDBMS databases for data integration. Experience building distributed high-performance systems using Spark/PySpark and AWS Glue, and developing applications for loading/streaming data into Databricks SQL warehouse and Redshift. Experience in Agile methodology. Proven skills to write technical specifications for data extraction and good quality code. Experience with big data processing techniques using Sqoop, Spark, and Hive is an additional plus. Experience in data visualization tools including Power BI and Tableau. Nice to have: experience in UI development using the Python Flask framework and Angular.
Mandatory Skills: Python for Insights.
Experience: 5-8 years.
Posted 1 month ago
5.0 - 9.0 years
12 - 22 Lacs
Hyderabad
Hybrid
Role & Responsibilities / Preferred Candidate Profile
What we look for is more than 3 years of experience in (must have): Core Java (8 or above), Spring Boot, RESTful APIs, microservices architecture, Maven, and AWS skills, mainly on services such as S3 buckets, Step Functions, Storage Gateway, ECS, EC2, DynamoDB, Aurora DB, Lambda functions, and Glue.
Posted 1 month ago
10.0 - 15.0 years
15 - 30 Lacs
Noida, Pune, Bengaluru
Work from Office
Roles and Responsibilities
Work closely with the Product Owners and stakeholders to design the technical architecture for the data platform to meet the requirements of the proposed solution. Work with the leadership to set the standards for software engineering practices within the machine learning engineering team and support across other disciplines. Play an active role in leading team meetings and workshops with clients. Choose and use the right analytical libraries, programming languages, and frameworks for each task. Help the Data Engineering team produce high-quality code that allows us to put solutions into production. Create and own the technical product backlogs for products and help the team close the backlogs in the right time. Refactor code into reusable libraries, APIs, and tools. Help us to shape the next generation of our products.
What We're Looking For
Total experience of 10+ years in the data management area, with experience in the implementation of modern data ecosystems on AWS/cloud platforms. Strong experience with AWS ETL/file movement tools (Glue, Athena, Lambda, Kinesis, and the rest of the AWS integration stack). Strong experience with Agile development and SQL. Strong experience with two or three AWS database technologies (Redshift, Aurora, RDS, S3, and other AWS data services), covering security, policies, and access management. Strong programming experience with Python and Spark. Strong learning curve for new technologies. Experience with Apache Airflow and other automation stacks. Excellent data modeling skills. Excellent oral and written communication skills. A high level of intellectual curiosity, external perspective, and interest in innovation. Strong analytical, problem-solving, and investigative skills. Experience in applying quality and compliance requirements. Experience with security models and development on large data sets.
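To ground the Lambda/Kinesis part of the AWS integration stack named above, here is a hedged sketch of a Python Lambda handler that decodes Kinesis records and lands them in S3; the bucket name and key layout are hypothetical.

```python
# Hedged sketch: Lambda handler that decodes Kinesis records and writes them to S3 as JSON lines.
import base64
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-landing-bucket"  # hypothetical landing bucket


def handler(event, context):
    lines = []
    for record in event.get("Records", []):
        # Kinesis payloads arrive base64-encoded in the event.
        payload = base64.b64decode(record["kinesis"]["data"])
        lines.append(json.loads(payload))

    if lines:
        key = f"events/{context.aws_request_id}.json"
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body="\n".join(json.dumps(x) for x in lines).encode("utf-8"),
        )
    return {"processed": len(lines)}
```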
Posted 1 month ago
8.0 - 13.0 years
25 - 35 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
AWS Architect combined with GenAI development experience, with hands-on experience in AWS services such as Glue, DynamoDB, and SageMaker.
AWS infrastructure: S3, Lambda, Glue, Bedrock, Boto3.
Glue infrastructure + PySpark.
Snowflake (standard SQL-based).
DynamoDB.
JSON and APIs.
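As a rough idea of the Bedrock/Boto3 piece of this GenAI-flavored role, a minimal sketch that calls a text model through the Bedrock runtime Converse API; the model ID, region, and prompt are placeholders, not details from the listing.

```python
# Assumption-heavy sketch: invoke a text model via the Amazon Bedrock Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize yesterday's failed Glue jobs in two sentences."}],
    }],
    inferenceConfig={"maxTokens": 256},
)

# The assistant's reply is the first text block of the output message.
print(response["output"]["message"]["content"][0]["text"])
```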
Posted 1 month ago
6.0 - 12.0 years
2 - 11 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Develop and implement efficient data pipelines using Apache Spark (PySpark preferred) to process and analyze large-scale data. Design, build, and optimize complex SQL queries to extract, transform, and load (ETL) data from multiple sources. Orchestrate data workflows using Apache Airflow, ensuring smooth execution and error-free pipelines. Design, implement, and maintain scalable and cost-effective data storage and processing solutions on AWS using S3, Glue, EMR, and Athena. Leverage AWS Lambda and Step Functions for serverless compute and task orchestration in data pipelines. Work with AWS databases like RDS and DynamoDB to ensure efficient data storage and retrieval. Monitor data processing and pipeline health using AWS CloudWatch and ensure smooth operation in production environments. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Perform performance tuning, optimize distributed data processing tasks, and handle scalability issues. Provide troubleshooting and support for data pipeline failures and ensure high availability and reliability. Contribute to the setup and maintenance of CI/CD pipelines for automated deployment and testing of data workflows.
Required Skills & Experience:
Experience: Minimum of 6+ years of hands-on experience in data engineering or big data development roles, with a focus on designing and building data pipelines and processing systems.
Technical Skills: Strong programming skills in Python with hands-on experience in Apache Spark (PySpark preferred). Proficient in writing and optimizing complex SQL queries for data extraction, transformation, and loading. Hands-on experience with Apache Airflow for orchestration of data workflows and pipeline management. In-depth understanding and practical experience with AWS services:
Data Storage & Processing: S3, Glue, EMR, Athena
Compute & Execution: Lambda, Step Functions
Databases: RDS, DynamoDB
Monitoring: CloudWatch
Experience with distributed data processing, parallel computing, and performance tuning techniques. Strong analytical and problem-solving skills to troubleshoot and optimize data workflows and pipelines. Familiarity with CI/CD pipelines and DevOps practices for continuous integration and automated deployments is a plus.
Preferred Qualifications: Familiarity with other cloud platforms (Azure, Google Cloud) and services related to data engineering. Experience in handling unstructured and semi-structured data and working with data lakes. Knowledge of containerization technologies such as Docker or orchestration systems like Kubernetes. Experience with NoSQL databases or data warehouses like Redshift or BigQuery is a plus.
Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
Experience: Minimum of 6+ years in a data engineering role with strong expertise in AWS and big data processing frameworks.
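For the Step Functions orchestration responsibility above, a small illustrative boto3 sketch that starts a state machine execution (for example, one chaining Glue jobs and Lambda checks) and polls its status; the state machine ARN and input are hypothetical.

```python
# Illustrative sketch: start a Step Functions execution and poll until it finishes.
import json
import time
import boto3

sfn = boto3.client("stepfunctions", region_name="ap-south-1")

STATE_MACHINE_ARN = (
    "arn:aws:states:ap-south-1:123456789012:stateMachine:example-daily-pipeline"
)

execution = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    input=json.dumps({"run_date": "2024-01-01"}),
)

# Poll until the pipeline finishes; in practice this would usually be event-driven.
while True:
    desc = sfn.describe_execution(executionArn=execution["executionArn"])
    if desc["status"] != "RUNNING":
        print("Pipeline finished with status:", desc["status"])
        break
    time.sleep(10)
```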
Posted 1 month ago
4.0 - 6.0 years
16 - 18 Lacs
Remote, , India
On-site
3-5 years of relevant experience in SQL, PL/SQL, and Python.
2-3 years of experience working with cloud data platforms such as Snowflake, Redshift, Databricks, etc.
2-3 years of experience in AWS (DMS, Glue, Lambda).
3-5 years of experience in ETL/ELT and data modelling (such as star schema, etc.).
Programming skills for data analytics (PL/SQL, Python).
Business analysis / requirements development.
Data modelling (entity relationships, star and snowflake schemas).
Data analysis.
Solution design and architecture for data products.
Strong technical and business communication skills.
Agile methodologies.
Strong interpersonal, verbal, and written communication.
Exceptional analytical, problem-solving, and troubleshooting abilities.
Posted 1 month ago