
564 Glue Jobs - Page 4

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be a valuable member of the data engineering team, contributing to the development of data pipelines, data transformations, and exploring new data patterns through proof-of-concept initiatives. Your role will also involve optimizing existing data feeds and implementing enhancements to improve data processes. Your primary skills should include a strong understanding of RDBMS concepts and hands-on experience with the AWS Cloud platform and its services such as IAM, EC2, Lambda, RDS, Timestream, and Glue. Additionally, proficiency in data streaming tools like Kafka, hands-on experience with ETL/ELT tools, and familiarity with databases like Snowflake or Postgres are essential. An understanding of data modeling techniques would be a bonus for this role.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a talented Big Data Engineer, you will be responsible for developing and managing our company's Big Data solutions. Your role will involve designing and implementing Big Data tools and frameworks, implementing ELT processes, collaborating with development teams, building cloud platforms, and maintaining the production system. To excel in this position, you should possess in-depth knowledge of Hadoop technologies, exceptional project management skills, and advanced problem-solving abilities. A successful Big Data Engineer comprehends the company's needs and establishes scalable data solutions to meet current and future requirements effectively.

Your responsibilities will include meeting with managers to assess the company's Big Data requirements and developing solutions on AWS utilizing tools like Apache Spark, Databricks, Delta Tables, EMR, Athena, Glue, and Hadoop. You will also be involved in loading disparate data sets, conducting pre-processing using tools such as Athena, Glue, and Spark, collaborating with software research and development teams, building cloud platforms for application development, and ensuring the maintenance of production systems.

The requirements for this role include a minimum of 5 years of experience as a Big Data Engineer; proficiency in Python and PySpark; and expertise in Hadoop, Apache Spark, Databricks, Delta Tables, and AWS data analytics services. Additionally, you should have extensive experience with Delta Tables and the JSON and Parquet file formats, familiarity with AWS data analytics services like Athena, Glue, Redshift, and EMR, and knowledge of data warehousing, NoSQL, and RDBMS databases. Good communication skills and the ability to solve complex data processing and transformation-related problems are essential for success in this role.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an experienced Software/Data Engineer with a passion for creating meaningful solutions, you will be joining a global team of innovators at a Siemens company. In this role, you will be responsible for developing data integration solutions using Java, Scala, and/or Python, with a focus on data and Business Intelligence (BI). Your primary responsibilities will include building data pipelines, data transformation, and data modeling to support various integration methods and information delivery techniques.

To excel in this position, you should have a Bachelor's degree in an Engineering or Science discipline or equivalent experience, along with at least 5 years of software/data engineering experience, including a minimum of 3 years in a data- and BI-focused role. Proficiency in data integration development using languages such as Python, PySpark, and SparkSQL, as well as experience with relational databases and SQL optimization, is essential for this role. Experience with AWS-based data services (e.g., Glue, RDS, Athena) and the Snowflake cloud data warehouse, along with familiarity with BI tools like Power BI, will be beneficial. Your willingness to experiment with new technologies and adapt to agile development practices will be key to your success in this role.

Join us in creating a brighter future where smarter infrastructure protects the environment and connects us all. Our culture is built on collaboration, support, and a commitment to helping each other grow both personally and professionally. If you are looking to make a positive impact and contribute to a more sustainable world, we invite you to explore how far your passion can take you with us.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

As a Senior Data Scientist with 5+ years of experience, you will play a crucial role in our team based in Indore/Pune. Your responsibilities will involve designing and implementing models, extracting insights from data, and interpreting complex data structures to facilitate business decision-making.

You should have a strong background in Machine Learning areas such as Natural Language Processing, machine vision, and time series, with expertise extending to model tuning, model validation, and supervised and unsupervised learning. Hands-on experience with model development, data preparation, and deployment of models for training and inference is essential. Proficiency in descriptive and inferential statistics, hypothesis testing, and data analysis and exploration are key skills required for this role, and you should be adept at developing code that enables reproducible data analysis. Familiarity with AWS services like SageMaker, Lambda, Glue, Step Functions, and EC2 is expected, as is knowledge of data science code development and deployment IDEs such as Databricks, the Anaconda distribution, and similar tools. You should also possess expertise in ML algorithms related to time series, natural language processing, optimization, object detection, topic modeling, clustering, and regression analysis.

Your skills should include proficiency in Hive/Impala, Spark, Python, Pandas, Keras, scikit-learn, StatsModels, TensorFlow, and PyTorch. At least 1 year of experience with end-to-end model deployment and production is required. Familiarity with model deployment in the Azure ML platform, Anaconda Enterprise, or AWS SageMaker is preferred. Basic knowledge of deep learning algorithms like MaskedCNN and YOLO, and of visualization and analytics/reporting tools such as Power BI, Tableau, and Alteryx, would be advantageous for this role.

Posted 1 week ago

Apply

7.0 - 12.0 years

35 - 50 Lacs

Hyderabad

Work from Office

Job Description: Spark, Java. Strong SQL writing skills; data discovery, data profiling, data exploration, and data wrangling skills. Kafka, AWS S3, Lake Formation, Athena, Glue, Autosys or similar tools; FastAPI (secondary). Strong SQL skills to support data analysis and embedded business logic in SQL, data profiling, and gap assessment. Collaborate with development and business SMEs within technology to understand data requirements, and perform data analysis to support and validate business logic, data integrity, and data quality rules within a centralized data platform. Experience working within the banking/financial services industry with a solid understanding of financial products and business processes.

Posted 1 week ago

Apply

4.0 - 8.0 years

14 - 24 Lacs

Hyderabad

Work from Office

Experience Required: Minimum 4.5+ years

Job Summary: We are seeking a skilled Data Engineer with a strong background in data ingestion, processing, and storage. The ideal candidate will have experience working with various data sources and technologies, particularly in a cloud environment. You will be responsible for designing and implementing data pipelines, ensuring data quality, and optimizing data storage solutions.

Key Responsibilities:
Design, develop, and maintain scalable data pipelines for data ingestion and processing using Python, Spark, and AWS services.
Work with on-prem Oracle databases, batch files, and Confluent Kafka for data sourcing.
Implement and manage ETL processes using AWS Glue and EMR for batch and streaming data.
Develop and maintain data storage solutions using the Medallion Architecture in S3, Redshift, and Oracle (see the sketch after this listing).
Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
Monitor and optimize data workflows using Airflow and other orchestration tools.
Ensure data quality and integrity throughout the data lifecycle.
Implement CI/CD practices for data pipeline deployment using Terraform and other tools.
Utilize monitoring and logging tools such as CloudWatch, Datadog, and Splunk to ensure system reliability and performance.
Communicate effectively with stakeholders to gather requirements and provide updates on project status.

Technical Skills Required:
Proficient in Python for data processing and automation.
Strong experience with Apache Spark for large-scale data processing.
Familiarity with AWS S3 for data storage and management.
Experience with Kafka for real-time data streaming.
Knowledge of Redshift for data warehousing solutions.
Proficient in Oracle databases for data management.
Experience with AWS Glue for ETL processes.
Familiarity with Apache Airflow for workflow orchestration.
Experience with EMR for big data processing.

Mandatory: Strong AWS data engineering skills.
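
To make the medallion pattern named above concrete, here is a minimal PySpark sketch that promotes raw bronze-layer CSVs in S3 to a cleansed, Parquet-backed silver layer; the bucket paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw CSV landed from Kafka/batch sources (hypothetical bucket/paths).
bronze = spark.read.option("header", "true").csv("s3://example-lake/bronze/orders/")

# Silver: typed, deduplicated, quality-checked records.
silver = (
    bronze
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .filter(F.col("amount").isNotNull())
)

# Write partitioned Parquet so downstream Redshift/Athena reads stay cheap.
(silver.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-lake/silver/orders/"))
```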

Posted 1 week ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Noida

Work from Office

Design, implement, and maintain data pipelines for processing large datasets, ensuring data availability, quality, and efficiency for machine learning model training and inference.
Collaborate with data scientists to streamline the deployment of machine learning models, ensuring scalability, performance, and reliability in production environments.
Develop and optimize ETL (Extract, Transform, Load) processes, ensuring data flow from various sources into structured data storage systems.
Automate ML workflows using ML Ops tools and frameworks (e.g., Kubeflow, MLflow, TensorFlow Extended (TFX)); a minimal MLflow sketch follows this listing.
Ensure effective model monitoring, versioning, and logging to track performance and metrics in a production setting.
Collaborate with cross-functional teams to improve data architectures and facilitate the continuous integration and deployment of ML models.
Work on data storage solutions, including databases, data lakes, and cloud-based storage systems (e.g., AWS, GCP, Azure).
Ensure data security, integrity, and compliance with data governance policies.
Perform troubleshooting and root cause analysis on production-level machine learning systems.

Skills: Glue, PySpark, AWS services, strong SQL. Nice to have: Redshift, knowledge of SAS datasets.

Mandatory Competencies:
DevOps/Configuration Mgmt - Docker
ETL - AWS Glue
Cloud Platforms - AWS
Containerization (Docker, Kubernetes)
Database - SQL Server - SQL Packages
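
As a sketch of the ML Ops automation this role describes, the snippet below logs a model run with MLflow so it can be versioned and monitored later; the experiment name, model choice, and metric are illustrative assumptions, not part of the posting.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical experiment name; MLflow records params/metrics/artifacts per run.
mlflow.set_experiment("demand-forecast")

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("mse", mse)
    # A versioned model artifact enables the monitoring/rollback described above.
    mlflow.sklearn.log_model(model, "model")
```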

Posted 1 week ago

Apply

6.0 - 11.0 years

20 - 30 Lacs

Bhopal, Hyderabad, Pune

Hybrid

Hello! Greetings from NewVision Software!

We are hiring on an immediate basis for the role of Senior / Lead Python Developer + AWS | NewVision Software | Pune, Hyderabad & Bhopal locations | Full-time. Professionals who can join us immediately or within 15 days are preferred. Please find the job details and description below.

NewVision Software, Pune HQ Office: 701 & 702, Pentagon Tower, P1, Magarpatta City, Hadapsar, Pune, Maharashtra - 411028, India
NewVision Software: The Hive Corporate Capital, Financial District, Nanakaramguda, Telangana - 500032
NewVision Software: IT Plaza, E-8, Bawadiya Kalan Main Rd, near Aura Mall, Gulmohar, Fortune Pride, Shahpura, Bhopal, Madhya Pradesh - 462039

Senior Python and AWS Developer

Role Overview: We are looking for a skilled senior Python developer with a strong background in AWS cloud services to join our team. The ideal candidate will be responsible for designing, developing, and maintaining robust backend systems, ensuring high performance and responsiveness to requests from the front end.

Responsibilities:
Develop, test, and maintain scalable web applications using Python and Django.
Design and manage relational databases with PostgreSQL, including schema design and optimization.
Build RESTful APIs and integrate with third-party services as needed.
Work with AWS services including EC2, EKS, ECR, S3, Glue, Step Functions, EventBridge rules, Lambda, SQS, SNS, and RDS.
Collaborate with front-end developers to deliver seamless end-to-end solutions.
Write clean, efficient, and well-documented code following best practices.
Implement security and data protection measures in applications.
Optimize application performance and troubleshoot issues as they arise.
Participate in code reviews, testing, and continuous integration processes.
Stay current with the latest trends and advancements in Python, Django, and database technologies.
Mentor junior Python developers.

Requirements:
6+ years of professional experience in Python development.
Strong proficiency with the Django web framework.
Experience working with PostgreSQL, including complex queries and performance tuning.
Familiarity with RESTful API design and integration.
Strong understanding of OOP, SOLID principles, and design patterns.
Strong knowledge of Python multithreading and multiprocessing.
Experience with AWS services: S3, Glue, Step Functions, EventBridge rules, Lambda, SQS, SNS, IAM, Secrets Manager, KMS, and RDS.
Understanding of version control systems (Git).
Knowledge of security best practices and application deployment.
Basic understanding of microservices architecture.
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills.

Nice to Have:
Experience with Docker, Kubernetes, or other containerization tools.
Front-end technologies (React).
Experience with CI/CD pipelines and DevOps practices.
Experience with infrastructure-as-code tools like Terraform.

Education: Bachelor's degree in computer science, engineering, or a related field (or equivalent experience).

Do share your resume with my email address: imran.basha@newvision-software.com

Please share your experience details:
Total Experience:
Relevant Experience:
Exp: Python: Yrs, AWS: Yrs, PostgreSQL: Yrs, REST API: Yrs, Django:
Current CTC:
Expected CTC:
Notice / Serving (LWD):
Any Offer in hand: LPA
Current Location:
Preferred Location:
Education:

Please share your resume and the above details for the hiring process to imran.basha@newvision-software.com

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office

Job Description: Job Title: ETL Testing. Experience: 5-8 years. Location: Chennai, Bangalore. Employment Type: Full-time. Job Type: Work from Office (Monday - Friday). Shift Timing: 12:30 PM to 9:30 PM.

Required Skills: Analytical skills to understand requirements and develop test cases; ability to understand and manage data; strong SQL skills. Hands-on testing of data pipelines built using Glue, S3, Redshift, and Lambda; collaborate with developers to build automated testing where appropriate; understanding of data concepts like data lineage, data integrity, and data quality; experience testing financial data is a plus.
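
As an illustration of the automated pipeline testing described above, here is a minimal pytest sketch that reconciles a Glue-loaded Redshift target against its staging table; the DSN, schemas, and table names are hypothetical.

```python
import psycopg2

# Hypothetical connection settings for a Redshift test environment.
REDSHIFT_DSN = (
    "host=example.redshift.amazonaws.com port=5439 dbname=dev user=tester password=change-me"
)

def fetch_scalar(sql: str) -> int:
    """Run a single-value query against Redshift and return the result."""
    with psycopg2.connect(REDSHIFT_DSN) as conn, conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchone()[0]

def test_no_rows_lost_in_load():
    # Staging is loaded from S3 by Glue; curated is the business-ready table.
    staged = fetch_scalar("SELECT COUNT(*) FROM staging.trades")
    loaded = fetch_scalar("SELECT COUNT(*) FROM curated.trades")
    assert staged == loaded, f"row count mismatch: {staged} staged vs {loaded} loaded"

def test_no_duplicate_trade_ids():
    # Data-integrity rule: the curated key must be unique.
    dupes = fetch_scalar(
        "SELECT COUNT(*) FROM (SELECT trade_id FROM curated.trades "
        "GROUP BY trade_id HAVING COUNT(*) > 1) d"
    )
    assert dupes == 0
```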

Posted 1 week ago

Apply

5.0 - 7.0 years

15 - 30 Lacs

Gurugram

Remote

Design, develop, and maintain robust data pipelines and ETL/ELT processes on AWS.
Leverage AWS services such as S3, Glue, Lambda, Redshift, Athena, EMR, and others to build scalable data solutions (an Athena query sketch follows this listing).
Write efficient and reusable code using Python for data ingestion, transformation, and automation tasks.
Collaborate with cross-functional teams including data analysts, data scientists, and software engineers to support data needs.
Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost efficiency.
Ensure data quality, security, and governance across all systems.
Communicate technical solutions clearly and effectively with both technical and non-technical stakeholders.

Required Skills & Qualifications:
5+ years of experience in data engineering roles.
Strong hands-on experience with Amazon Web Services (AWS), particularly data-related services (e.g., S3, Glue, Lambda, Redshift, EMR, Athena).
Proficiency in Python for scripting and data processing.
Experience with SQL and working with relational databases.
Solid understanding of data architecture, data modeling, and data warehousing concepts.
Experience with CI/CD pipelines and version control tools (e.g., Git).
Excellent verbal and written communication skills.
Proven ability to work independently in a fully remote environment.

Preferred Qualifications:
Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions.
Familiarity with big data technologies such as Apache Spark or Hadoop.
Exposure to infrastructure-as-code tools like Terraform or CloudFormation.
Knowledge of data privacy and compliance standards.
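
As an illustration of querying S3-resident data through Athena from Python, here is a minimal boto3 sketch; the region, database, table, and results bucket are hypothetical, and production code would add backoff and error handling.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")  # hypothetical region

# Hypothetical database/table; Athena needs an S3 location for query results.
query = "SELECT event_date, COUNT(*) AS events FROM analytics.clickstream GROUP BY event_date"
qid = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row is the column header
        print([col.get("VarCharValue") for col in row["Data"]])
```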

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Chennai, Bengaluru

Work from Office

Job Description: Job Title: Data Engineer. Experience: 5-8 years. Location: Chennai, Bangalore. Employment Type: Full-time. Job Type: Work from Office (Monday - Friday). Shift Timing: 12:30 PM to 9:30 PM.

Required Skills:
5-8 years of experience as a back-end data engineer.
Strong experience in SQL.
Strong knowledge of and experience with Python and PySpark.
Experience in AWS.
Experience in Docker and OpenShift.
Hands-on experience with REST concepts.
Design and develop business solutions on the data front.
Experience implementing new enhancements and handling defect triage.
Strong analytical abilities.

Additionally Preferred Skills/Competencies:
Jira, Bitbucket.
Experience with Kafka.
Experience with Snowflake.
Domain knowledge in banking.
Analytical skills.
Excellent communication skills.
Working knowledge of Agile.

Thanks & Regards, Suresh Kumar Raja, CGI.

Posted 1 week ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Responsibilities:
Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform.
Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing (a minimal DAG sketch follows this listing).
Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing.
Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions.
Design and implement data warehouse solutions that support analytical needs and machine learning applications.
Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features.
Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability.
Optimize query performance across various database systems through indexing, partitioning, and query refactoring.
Develop and maintain documentation for data models, pipelines, and processes.
Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
Stay current with emerging technologies and best practices in data engineering.

Requirements:
5+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure.
Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL.
Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB.
Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies.
Experience with data warehousing concepts and technologies.
Solid understanding of data modeling principles and best practices for both operational and analytical systems.
Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning.
Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack.
Proficiency in at least one programming language (Python, Node.js, Java).
Experience with version control systems (Git) and CI/CD pipelines.

Nice to Have:
Experience with graph databases (Neo4j, Amazon Neptune).
Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures.
Experience working with streaming data technologies and real-time data processing.
Familiarity with data governance and data security best practices.
Experience with containerization technologies (Docker, Kubernetes).
Understanding of financial back-office operations and the FinTech domain.
Experience working in a high-growth startup environment.
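
As a sketch of the Airflow orchestration named above, here is a minimal DAG wiring extract, transform, and load steps; the DAG id and placeholder callables are hypothetical, and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_kafka():
    """Placeholder: consume CDC events (e.g., Debezium topics) into staging."""

def transform():
    """Placeholder: clean and conform staged records."""

def load_to_warehouse():
    """Placeholder: upsert conformed records into the warehouse."""

with DAG(
    dag_id="cdc_daily_load",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_kafka)
    clean = PythonOperator(task_id="transform", python_callable=transform)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    # Linear dependency chain: extract, then transform, then load.
    extract >> clean >> load
```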

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Noida

Work from Office

Expert in Python (5+ years). Expert in Django or similar frameworks (5+ years). Experience (2+ years) in TypeScript, JavaScript, and JS frameworks (Angular > 2 with Angular Material). Good knowledge of RDBMS (preferably Postgres). Experience and sound knowledge of AWS services (ECS, Lambda, deployment pipelines, etc.). Excellent written and verbal communication skills. Very good analytical and problem-solving skills. Ability to pick up new technologies. Write clean, maintainable, and efficient code. Willingness to learn and understand the business domain.

Mandatory Competencies:
Programming Language - Python - Django
Behavioural - Communication
User Interface - TypeScript
User Interface - JavaScript
Cloud - AWS - ECS
Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate

Posted 1 week ago

Apply

7.0 - 9.0 years

6 - 10 Lacs

Chennai

Work from Office

As a Technical Lead - Cloud Data Platform (AWS) at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on the AWS platform. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
Integrating and processing large amounts of structured and unstructured data from various sources
Implementing and optimizing ETL processes and data pipelines
Developing and maintaining security and access controls
Collaborating with other teams to ensure the consistency and integrity of data
Troubleshooting and resolving data platform issues

Technical Skills Requirements:
In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda
Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
Familiarity with cloud-based infrastructure and deployment, specifically on AWS
Strong knowledge of programming languages such as Python, Java, and SQL
Excellent communication skills, with the ability to communicate complex technical information to non-technical stakeholders in a clear and concise manner
Understanding of and alignment with the company's long-term vision
Openness to new ideas and willingness to learn and develop new skills; able to work well under pressure and manage multiple tasks and priorities

Qualifications:
7-9 years of work experience in a relevant field
B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred

Posted 1 week ago

Apply

3.0 - 8.0 years

6 - 10 Lacs

Gurugram

Work from Office

Understands the process flow and the impact on the project module outcome. Works on coding assignments for specific technologies based on the project requirements and available documentation. Debugs basic software components and identifies code defects. Focuses on building depth in project-specific technologies. Expected to develop domain knowledge along with technical skills. Effectively communicates with team members, project managers, and clients, as required. A proven high-performer and team-player, with the ability to take the lead on projects.

Responsibilities:
Design and create S3 buckets and folder structures (raw, cleansed_data, output, script, temp-dir, spark-ui)
Develop AWS Lambda functions (Python/Boto3) to download Bhav Copy via REST API and ingest into S3
Author and maintain AWS Glue Spark jobs to partition data by scrip, year, and month, and convert CSV to Parquet with Snappy compression (see the sketch below)
Configure and run AWS Glue Crawlers to populate the Glue Data Catalog
Write and optimize AWS Athena SQL queries to generate business-ready datasets
Monitor, troubleshoot, and tune data workflows for cost and performance
Document architecture, code, and operational runbooks
Collaborate with analytics and downstream teams to understand requirements and deliver SLAs

Technical Skills:
3+ years hands-on experience with AWS data services (S3, Lambda, Glue, Athena)
PostgreSQL basics
Proficient in SQL and data partitioning strategies
Experience with Parquet file formats and compression techniques (Snappy)
Ability to configure Glue Crawlers and manage the AWS Glue Data Catalog
Understanding of serverless architecture and best practices in security, encryption, and cost control
Good documentation, communication, and problem-solving skills

Qualifications:
3-5 years of work experience in a relevant field
B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
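
Here is a minimal sketch of the Glue Spark job described above, converting raw CSVs to Snappy-compressed Parquet partitioned by scrip, year, and month; the bucket layout and column names (trade_date, symbol) are hypothetical stand-ins for the actual Bhav Copy schema.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical layout: raw CSVs landed by the Lambda ingester.
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/bhavcopy/")

# Hypothetical columns; derive year/month partitions from the trade date.
enriched = (
    raw.withColumn("trade_date", F.to_date("trade_date"))
       .withColumn("year", F.year("trade_date"))
       .withColumn("month", F.month("trade_date"))
)

# Partition by scrip/year/month; Snappy Parquet keeps Athena scans small and cheap.
(enriched.write
    .mode("append")
    .partitionBy("symbol", "year", "month")
    .option("compression", "snappy")
    .parquet("s3://example-bucket/cleansed_data/bhavcopy/"))

job.commit()
```

After a write like this, a Glue Crawler pointed at the cleansed_data prefix would register the partitions in the Data Catalog for Athena to query.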

Posted 1 week ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

Kochi, Coimbatore, Thiruvananthapuram

Work from Office

Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift

Job Summary: You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities:
Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
Solving complex data problems to deliver insights that help our business achieve its goals
Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format
Creating data products for analytics team members to improve productivity
Calling AI services like vision, translation, etc. to generate outcomes that can be used in further steps along the pipeline
Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions
Preparing data to create a unified database and building tracking solutions ensuring data quality
Creating production-grade analytical assets deployed using the guiding principles of CI/CD

Professional and Technical Skills:
Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript
Extensive experience in data analysis (big data - Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras, etc.), and SQL, with 2-3 years of hands-on experience working with these technologies
Experience in one of the many BI tools such as Tableau, Power BI, or Looker
Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs
Extensive work in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse

Additional Information:
Experience working in cloud data warehouses like Redshift or Synapse
Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; SnowPro Core - Data Engineer; Databricks Data Engineering

Qualification: 3.5-5 years of experience is required

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Gurugram

Work from Office

Role Description: As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on the AWS platform. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
Integrating and processing large amounts of structured and unstructured data from various sources
Implementing and optimizing ETL processes and data pipelines
Developing and maintaining security and access controls
Collaborating with other teams to ensure the consistency and integrity of data
Troubleshooting and resolving data platform issues

Technical Skills Requirements:
In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda
Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
Familiarity with cloud-based infrastructure and deployment, specifically on AWS
Strong knowledge of programming languages such as Python, Java, and SQL
Excellent communication skills, with the ability to communicate complex technical information to non-technical stakeholders in a clear and concise manner
Understanding of and alignment with the company's long-term vision
Ability to provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team

Qualifications:
4-6 years of work experience in a relevant field
B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred

Posted 1 week ago

Apply

5.0 - 12.0 years

0 - 0 Lacs

Hyderabad, Telangana

On-site

As a Senior Software Engineer with 5-8 years of experience, you will be responsible for developing efficient and scalable software solutions. Your primary focus will be on utilizing Core Java (8 or above), Spring Boot, RESTful APIs, and a microservices architecture to deliver high-quality applications. Additionally, you will be expected to work with Maven and possess strong AWS skills, particularly in services like S3, Step Functions, Storage Gateway, ECS, EC2, DynamoDB, Aurora, Lambda, and Glue. In this role, it is essential to have expertise in code management using Git, setting up CI/CD pipelines with tools like Jenkins/GitHub, and working with Docker/Kubernetes for containerization. Your knowledge of SQL/NoSQL databases such as PostgreSQL and MongoDB will be beneficial. Experience with testing frameworks, including unit testing (JUnit/Mockito), integration testing, mutation testing, and TDD, is crucial. Proficiency in Kafka, GraphQL/Supergraph, and Splunk/Honeycomb dashboards will be advantageous. You will also be involved in interacting with APIs, ensuring security in AWS, and managing containers; AWS certifications are valued. The ideal candidate should have strong communication skills, be a team player, and possess a proactive attitude towards problem-solving. This position is based in Hyderabad and offers a competitive salary based on your experience level. If you are passionate about software engineering and have a solid background in the mentioned technologies, we encourage you to apply and be part of our dynamic team.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

You are a seasoned Confluent & Oracle EBS Cloud Engineer with over 10 years of experience, responsible for leading the design and implementation of scalable, cloud-native data solutions. Your role involves modernizing enterprise data infrastructure, driving real-time data streaming initiatives, and migrating legacy ERP systems to AWS-based platforms.

Your key responsibilities include architecting and implementing cloud-based data platforms using AWS services such as Redshift, Glue, DMS, and Data Lake solutions. You will lead the migration of Oracle E-Business Suite or similar ERP systems to AWS while ensuring data integrity and performance. Additionally, you will design and drive the implementation of Confluent Kafka for real-time data streaming across enterprise systems. It is essential for you to define and enforce data architecture standards, governance policies, and best practices. Collaborating with engineering, data, and business teams to align architecture with strategic goals is also a crucial aspect of your role, as is optimizing data pipelines and storage for scalability, reliability, and cost-efficiency.

To excel in this role, you must have 10+ years of experience in data architecture, cloud engineering, or enterprise systems design. Deep expertise in AWS services, including Redshift, Glue, DMS, and Data Lake architectures, is required, as is proven experience with Confluent Kafka for real-time data streaming and event-driven architectures. Hands-on experience migrating large-scale ERP systems (e.g., Oracle EBS) to cloud platforms is a must, along with a strong understanding of data governance, security, and compliance in cloud environments and proficiency in designing scalable, fault-tolerant data systems. Preferred qualifications include experience with data modeling, metadata management, and lineage tracking; familiarity with infrastructure-as-code and CI/CD practices; and strong communication and leadership skills to guide cross-functional teams.
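
To make the Confluent Kafka streaming concrete, below is a minimal producer sketch using the confluent-kafka Python client; the broker address, topic, and event shape are hypothetical, and a production setup would add SASL/SSL security configuration.

```python
import json

from confluent_kafka import Producer

# Hypothetical broker; real configs would add security settings (SASL/SSL).
producer = Producer({"bootstrap.servers": "broker.example.com:9092"})

def delivery_report(err, msg):
    """Called once per message to confirm delivery or surface an error."""
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}] @ offset {msg.offset()}")

# Publish an ERP change event, keyed by order id so updates stay ordered per key.
event = {"order_id": "SO-1001", "status": "BOOKED", "amount": 125.50}
producer.produce(
    "erp.order.events",                       # hypothetical topic name
    key=event["order_id"],
    value=json.dumps(event).encode("utf-8"),
    callback=delivery_report,
)
producer.flush()
```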

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Platform Engineer Lead at Barclays, your role is crucial in building and maintaining systems that collect, store, process, and analyze data, including data pipelines, data warehouses, and data lakes. Your responsibility includes ensuring the accuracy, accessibility, and security of all data.

To excel in this role, you should have hands-on coding experience in Java or Python and a strong understanding of AWS development, encompassing services such as Lambda, Glue, Step Functions, and IAM roles. Proficiency in building efficient data pipelines using Apache Spark and AWS services is essential. You are expected to possess strong technical acumen, troubleshoot complex systems, and apply sound engineering principles to problem-solving. Continuous learning and staying updated with new technologies are key attributes for success in this role. Design experience in diverse projects where you have led the technical development is advantageous, especially in the Big Data/Data Warehouse domain within financial services. Additional skills in enterprise-level software solutions development, knowledge of file formats like JSON, Iceberg, and Avro, and familiarity with streaming services such as Kafka, MSK, and Kinesis are highly valued. Effective communication, collaboration with cross-functional teams, documentation skills, and experience mentoring team members are also important aspects of this role.

Your accountabilities will include the construction and maintenance of data architectures and pipelines, designing and implementing data warehouses and data lakes, developing processing and analysis algorithms, and collaborating with data scientists to deploy machine learning models. You will also be expected to contribute to strategy, drive requirements for change, manage resources and policies, deliver continuous improvements, and demonstrate leadership behaviors if in a leadership role. Ultimately, as a Data Platform Engineer Lead at Barclays in Pune, you will play a pivotal role in ensuring data accuracy, accessibility, and security while leveraging your technical expertise and collaborative skills to drive innovation and excellence in data management.

Posted 1 week ago

Apply

5.0 - 6.0 years

12 - 16 Lacs

Thiruvananthapuram

Remote

Build and manage infrastructure for data storage, processing, and analysis.
Experience in AWS Cloud Services (Glue, Lambda, Athena, Lakehouse).
AWS CDK for Infrastructure-as-Code (IaC) with TypeScript.
Skills in Python, PySpark, Spark SQL, and TypeScript.

Required Candidate Profile:
5 to 6 years of data pipeline development and orchestration using AWS Glue.
Leadership experience.
UK clients; work timings will be aligned with the client's requirements and may follow UK time zones.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Maharashtra

On-site

As a Solutions Architect with over 7 years of experience, you will have the opportunity to leverage your expertise in cloud data solutions to architect scalable and modern solutions on AWS. In this role at Quantiphi, you will be a key member of our high-impact engineering teams, working closely with clients to solve complex data challenges and design cutting-edge data analytics solutions.

Your responsibilities will include acting as a trusted advisor to clients, leading discovery/design workshops with global customers, and collaborating with AWS subject matter experts to develop compelling proposals and Statements of Work (SOWs). You will also represent Quantiphi in various forums such as tech talks, webinars, and client presentations, providing strategic insights and solutioning support during pre-sales activities.

To excel in this role, you should have a strong background in AWS data services including DMS, SCT, Redshift, Glue, Lambda, EMR, and Kinesis. Your experience in data migration and modernization, particularly from Oracle, Teradata, and Netezza to AWS, will be crucial. Hands-on experience with ETL tools such as SSIS, Informatica, and Talend, as well as a solid understanding of OLTP/OLAP, star and snowflake schemas, and data modeling methodologies, is essential for success in this position. Additionally, familiarity with backend development using Python, APIs, and stream-processing technologies like Kafka, along with knowledge of distributed computing concepts including Hadoop and MapReduce, will be beneficial, as will a DevOps mindset with experience in CI/CD practices and Infrastructure as Code.

Joining Quantiphi as a Solutions Architect is more than just a job; it's an opportunity to shape digital transformation journeys and influence business strategies across various industries. If you are a cloud data enthusiast looking to make a significant impact in the field of data analytics, this role is perfect for you.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for working with AWS CDK (using TypeScript) and CloudFormation templates to manage various AWS services such as Redshift, Glue, IAM roles, KMS keys, Secrets Manager, Airflow, SFTP, AWS Lambda, S3, and EventBridge. Your tasks will include executing grants, stored procedures, and queries, and using Redshift Spectrum to query S3; defining execution roles and debugging jobs; creating IAM roles with fine-grained access; integrating and deploying services; managing KMS keys and configuring Secrets Manager; creating Airflow DAGs; executing and debugging serverless AWS Lambda functions; managing S3 object storage, including lifecycle configuration, resource-based policies, and encryption; and setting up event triggers using EventBridge rules. You should have knowledge of AWS Redshift SQL Workbench for executing grants and a strong understanding of networking concepts, security, and cloud architecture. Experience with monitoring tools like CloudWatch and familiarity with containerization tools like Docker and Kubernetes would be beneficial. Strong problem-solving skills and the ability to thrive in a fast-paced environment are essential.

Virtusa is a company that values teamwork, quality of life, and professional and personal development. With a global team of 27,000 professionals, Virtusa is committed to supporting your growth by providing exciting projects, opportunities to work with cutting-edge technologies, and a collaborative team environment that encourages the exchange of ideas and excellence. At Virtusa, you will have the chance to work with great minds and unleash your full potential in a dynamic and innovative workplace.
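
The posting names AWS CDK with TypeScript; as a sketch of the same infrastructure-as-code idea (kept in Python for consistency with the other examples on this page), here is a minimal CDK stack wiring an encrypted S3 bucket with a lifecycle rule, a Lambda function, and a scheduled EventBridge rule. All construct names, the asset path, and the schedule are hypothetical.

```python
from aws_cdk import Duration, Stack
from aws_cdk import aws_events as events, aws_events_targets as targets
from aws_cdk import aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class IngestStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Encrypted bucket with a lifecycle rule expiring stale raw objects.
        bucket = s3.Bucket(
            self, "RawBucket",
            encryption=s3.BucketEncryption.KMS_MANAGED,
            lifecycle_rules=[s3.LifecycleRule(expiration=Duration.days(90))],
        )

        # Serverless ingest function; handler code lives in ./lambda (hypothetical).
        fn = _lambda.Function(
            self, "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
        )
        bucket.grant_read_write(fn)  # CDK generates the fine-grained IAM policy

        # EventBridge rule triggers the function nightly at 01:00 UTC.
        rule = events.Rule(
            self, "NightlyRule",
            schedule=events.Schedule.cron(minute="0", hour="1"),
        )
        rule.add_target(targets.LambdaFunction(fn))
```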

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You will be responsible for building the most personalized and intelligent news experiences for India's next 750 million digital users. As our Principal Data Engineer, your main tasks will include designing and maintaining the data infrastructure that powers personalization systems and analytics platforms. This involves ensuring seamless data flow from source to consumption, architecting scalable data pipelines to process massive volumes of user interaction and content data, and developing robust ETL processes for large-scale transformations and analytical processing. You will also be involved in creating and maintaining data lakes/warehouses that consolidate data from multiple sources, optimized for ML model consumption and business intelligence. Additionally, you will implement data governance practices and collaborate with the ML team to ensure the right data availability for recommendation systems.

To excel in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field, along with 8-12 years of data engineering experience, including at least 3 years in a senior role. You must possess expert-level SQL skills and have strong experience with the Apache Spark ecosystem (Spark SQL, Streaming, SparkML), as well as proficiency in Python/Scala. Experience with the AWS data ecosystem (Redshift, S3, Glue, EMR, Kinesis, Lambda, Athena) and ETL frameworks (Glue, Airflow) is essential. A proven track record of building large-scale data pipelines in production environments, particularly in high-traffic digital media, will be advantageous. Excellent communication skills are also required, as you will need to collaborate effectively across teams in a fast-paced environment that demands engineering agility.

Posted 2 weeks ago

Apply