3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Senior Data Lake Developer at BCSS, you will play a crucial role in advancing and expanding our advanced analytics practice. Your primary responsibility will be designing and developing data lakes, managing data flows, and integrating information from diverse sources into a unified data lake platform through an ETL tool. You will also code and oversee delta lake implementations on S3 using technologies such as Databricks or Apache Hudi. You will triage, debug, and resolve technical issues related to data lakes, and design and develop data warehouses for scalability. Evaluating data models, designing data access patterns, and coordinating with business and technical teams throughout the software development life cycle are also key aspects of this role. Additionally, you will participate in significant technical and architectural decisions and maintain code repositories such as Git.

To excel in this role, you must have at least 5 years of experience on AWS Cloud with a focus on building data lake architectures, and a minimum of 3 years of experience with AWS data services such as S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS, and Redshift. You should also have 3+ years of experience building data warehouses on platforms such as Snowflake, Redshift, HANA, Teradata, or Exasol. Proficiency in Spark, building delta lakes using technologies like Apache Hudi or Databricks, and working with ETL tools and technologies is essential. Furthermore, you should have 3+ years of experience in at least one programming language (Python, R, Scala, Java) and hold a Bachelor's degree in computer science, information technology, data science, data analytics, or a related field. Experience with Agile projects and Agile methodology is highly beneficial for this role.
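To make the delta lake responsibility above concrete, here is a minimal PySpark sketch of one way such a load could look, assuming Delta Lake (one of the two technologies the posting names); the bucket names, paths, and partition column are hypothetical.

```python
# Minimal sketch: appending a daily batch to a Delta table on S3 with PySpark.
# Bucket names, paths, and columns are illustrative assumptions, not from the posting.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delta-lake-load-sketch")
    # Delta Lake must be available on the cluster (bundled on Databricks).
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Read the day's extract produced by the upstream ETL tool (hypothetical landing path).
updates = spark.read.parquet("s3://example-landing-bucket/orders/2024-06-01/")

# Simple append; a production delta lake would often use MERGE for upserts instead.
(updates.write
    .format("delta")
    .mode("append")
    .partitionBy("order_date")
    .save("s3://example-curated-bucket/delta/orders/"))
```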
Posted 19 hours ago
7.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Key Responsibilities:
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Requirements:
- 7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Strong hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
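As an illustration of the application/API hosting piece, a Lambda function behind API Gateway (proxy integration) can be as small as the sketch below; the query parameter and response shape are assumptions for the example only.

```python
# Minimal sketch of a Python Lambda handler behind an API Gateway proxy integration.
# The query parameter and response payload are illustrative assumptions.
import json

def lambda_handler(event, context):
    # API Gateway proxy integration passes the HTTP request in as a JSON event.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```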
Posted 1 day ago
10.0 - 14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Cloud Architect - Analytics & Data Products

We're looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Key Responsibilities:
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Requirements:
- 10-14 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Strong hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
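For the GenAI infrastructure point, a hedged sketch of calling a foundation model through Amazon Bedrock with boto3 is shown below; the region, model ID, and prompt are assumptions, and access to the model must already be enabled in the account.

```python
# Minimal sketch: invoking a model on Amazon Bedrock with boto3.
# Region, model ID, and prompt are illustrative assumptions.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarise yesterday's pipeline failures."}],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model id
    body=request_body,
)
print(json.loads(response["body"].read())["content"][0]["text"])
```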
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
Kochi, Kerala
On-site
You should have experience in designing and building serverless data lake solutions using a layered components architecture, covering the ingestion, storage, processing, security and governance, data cataloguing and search, and consumption layers. Proficiency in AWS serverless technologies such as Lake Formation, Glue, Glue Python, Glue Workflows, Step Functions, S3, Redshift, QuickSight, Athena, AWS Lambda, and Kinesis is required, and hands-on experience with Glue is essential. You must be skilled in designing, building, orchestrating, and deploying multi-step data processing pipelines using Python and Java. Experience in managing source data access security, configuring authentication and authorization, and enforcing data policies and standards is also necessary, along with familiarity with AWS environment setup and configuration.

A minimum of 6 years of relevant experience, with at least 3 years building solutions using AWS, is mandatory. The ability to work under pressure and a commitment to meeting customer expectations are essential for this role. If you meet these requirements and are ready to take on this challenging opportunity, please reach out to hr@Stanratech.com.
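As a small illustration of the consumption layer mentioned above, the sketch below runs an Athena query over the catalogued lake with boto3; the database, table, and results bucket are hypothetical.

```python
# Minimal sketch: querying the data lake through Athena with boto3.
# Database, table, and output location are illustrative assumptions.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

query_id = athena.start_query_execution(
    QueryString="SELECT order_date, count(*) AS orders FROM sales GROUP BY order_date",
    QueryExecutionContext={"Database": "curated_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes; a real pipeline would orchestrate this with Step Functions.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row holds column headers
        print([col.get("VarCharValue") for col in row["Data"]])
```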
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
Join us as a Cloud Data Engineer at Barclays, where you'll spearhead the evolution of the digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences. You may be assessed on key critical skills relevant for success in the role, such as risk and control, change and transformation, business acumen, strategic thinking, and digital technology, as well as job-specific skill sets.

To be successful as a Cloud Data Engineer, you should have experience with:
- AWS Cloud technology for data processing and a good understanding of AWS architecture.
- Compute services such as EC2, Lambda, Auto Scaling, and VPC.
- Storage and container services such as ECS, S3, DynamoDB, and RDS.
- Management and governance services: KMS, IAM, CloudFormation, CloudWatch, and CloudTrail.
- Analytics services such as Glue, Athena, Crawler, Lake Formation, and Redshift.
- Solution delivery for data processing components in larger end-to-end projects.

Desirable skill sets / good to have:
- AWS Certified professional.
- Experience in data processing on Databricks and Unity Catalog.
- Ability to drive projects technically with right-first-time deliveries within schedule and budget.
- Ability to collaborate across teams to deliver complex systems and components and manage stakeholders' expectations well.
- Understanding of different project methodologies, project lifecycles, major phases, dependencies and milestones within a project, and the required documentation needs.
- Experience with planning, estimating, organizing, and working on multiple projects.

This role will be based out of Pune.

Purpose of the role: To build and maintain systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage appropriate data volumes and velocity and adhere to required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organization's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organization sub-function.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organization.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.
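Relating to the Glue and Crawler experience this posting asks for, a brief boto3 sketch of registering and running a crawler is shown below; the crawler name, IAM role ARN, database, and S3 path are hypothetical.

```python
# Minimal sketch: creating and starting a Glue crawler so the data is catalogued
# for Athena and Redshift Spectrum. Names and the role ARN are illustrative assumptions.
import boto3

glue = boto3.client("glue", region_name="eu-west-1")

glue.create_crawler(
    Name="curated-orders-crawler",
    Role="arn:aws:iam::123456789012:role/example-glue-crawler-role",  # assumed role
    DatabaseName="curated_lake",
    Targets={"S3Targets": [{"Path": "s3://example-curated-bucket/orders/"}]},
)

glue.start_crawler(Name="curated-orders-crawler")
```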
Posted 1 week ago
10.0 - 15.0 years
16 - 27 Lacs
Pune
Work from Office
Dear Candidate,

This is with reference to an opportunity for Lead - AWS Data Engineering professionals. Please find below the job description.

Responsibilities:
- Lead the design and implementation of AWS-based data architectures and pipelines.
- Architect and optimize data solutions using AWS services such as S3, Redshift, Glue, EMR, and Lambda.
- Provide technical leadership and mentorship to a team of data engineers.
- Collaborate with stakeholders to define project requirements and ensure alignment with business goals.
- Ensure best practices in data security, governance, and compliance.
- Troubleshoot and resolve complex technical issues in AWS data environments.
- Stay updated on the latest AWS technologies and industry trends.

Key Technical Skills & Responsibilities:
- Experience in the design and development of cloud data platforms using AWS services.
- Must have experience in the design and development of data lake / data warehouse / data analytics solutions using AWS services such as S3, Lake Formation, Glue, Athena, EMR, Lambda, and Redshift.
- Must be aware of AWS access control and data security features such as VPC, IAM, Security Groups, and KMS.
- Must be good with Python and PySpark for data pipeline building.
- Must have data modeling experience, including S3 data organization.
- Must have an understanding of Hadoop components, NoSQL databases, graph databases, and time series databases, and the AWS services available for those technologies.
- Must have experience working with structured, semi-structured, and unstructured data.
- Must have experience with streaming data collection and processing; Kafka experience is preferred.
- Experience migrating data warehouse / big data applications to AWS is preferred.
- Must be able to use GenAI services (such as Amazon Q) for productivity gains.

Eligibility Criteria:
- Bachelor's degree in computer science, data engineering, or a related field.
- Extensive experience with AWS data services and tools.
- AWS certification (e.g., AWS Certified Data Analytics - Specialty).
- Experience with machine learning and AI integration in AWS environments.
- Strong understanding of data modeling, ETL/ELT processes, and cloud integration.
- Proven leadership experience in managing technical teams.
- Excellent problem-solving and communication skills.

Skill - Senior Tech Lead, AWS Data Engineering
Location - Pune
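To ground the Python/PySpark pipeline-building and S3 data organization points, here is a minimal PySpark sketch of a single batch step; bucket names, paths, and columns are illustrative assumptions.

```python
# Minimal sketch of a PySpark batch step: read raw landing data from S3, clean it,
# and write it back partitioned for downstream Athena/Redshift access.
# Bucket names, paths, and columns are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-batch-transform-sketch").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/")

cleaned = (
    raw.withColumn("event_date", F.to_date("event_timestamp"))
       .dropDuplicates(["event_id"])
       .filter(F.col("event_type").isNotNull())
)

(cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/"))
```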
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Join us as a Data Engineer at Barclays, where you will spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable, and secure infrastructure, ensuring seamless delivery of our digital solutions.

To be successful as a Data Engineer, you should have hands-on experience in PySpark and strong knowledge of DataFrames, RDDs, and SparkSQL. You should also have hands-on experience developing, testing, and maintaining applications on AWS Cloud. A strong command of the AWS data analytics technology stack (Glue, S3, Lambda, Lake Formation, Athena) is essential. Additionally, you should be able to design and implement scalable and efficient data transformation and storage solutions using Snowflake. Experience in data ingestion to Snowflake for different storage formats such as Parquet, Iceberg, JSON, and CSV is required. Familiarity with using dbt (Data Build Tool) with Snowflake for ELT pipeline development is necessary. Advanced SQL and PL/SQL programming skills are a must. Experience in building reusable components using Snowflake and AWS tools and technology is highly valued. Exposure to data governance or lineage tools such as Immuta and Alation is an added advantage. Knowledge of orchestration tools such as Apache Airflow or Snowflake Tasks is beneficial, and familiarity with the Ab Initio ETL tool is a plus.

Some other highly valued skills include the ability to engage with stakeholders, elicit requirements/user stories, and translate requirements into ETL components; a good understanding of infrastructure setup and the ability to provide solutions either individually or working with teams; knowledge of data marts and data warehousing concepts, along with good analytical and interpersonal skills; and experience implementing a cloud-based enterprise data warehouse across multiple data platforms, including Snowflake and NoSQL environments, to build a data movement strategy.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. The role is based out of Chennai.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Meet the needs of stakeholders/customers through specialist advice and support.
- Perform prescribed activities in a timely manner and to a high standard, impacting both the role itself and surrounding roles.
- Likely to have responsibility for specific processes within a team.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- Demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
- Manage your own workload, take responsibility for the implementation of systems and processes within your own work area, and participate in projects broader than the direct team.
- Execute work requirements as identified in processes and procedures, collaborating with and impacting the work of closely related teams.
- Provide specialist advice and support pertaining to your own work area.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how all teams in the area contribute to the objectives of the broader sub-function, delivering impact on the work of collaborating teams.
- Continually develop awareness of the underlying principles and concepts on which the work within the area of responsibility is based, building upon administrative/operational expertise.
- Make judgements based on practice and previous experience.
- Assess the validity and applicability of previous or similar experiences and evaluate options under circumstances that are not covered by procedures.
- Communicate sensitive or difficult information to customers in areas related specifically to customer advice or day-to-day administrative requirements.
- Build relationships with stakeholders/customers to identify and address their needs.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.
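For the Snowflake ingestion requirement (Parquet and similar formats), a hedged sketch using the Snowflake Python connector and a COPY INTO statement is shown below; the stage, table, and connection placeholders are assumptions.

```python
# Minimal sketch: loading Parquet files from an external S3 stage into Snowflake
# with COPY INTO via the Snowflake Python connector. Stage, table, and credential
# placeholders are illustrative assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="***",            # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    conn.cursor().execute("""
        COPY INTO STAGING.ORDERS
        FROM @EXT_S3_STAGE/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    conn.close()
```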
Posted 2 weeks ago
8.0 - 13.0 years
0 - 1 Lacs
Chennai
Hybrid
Duties and Responsibilities:
- Lead the design and implementation of scalable, secure, and high-performance solutions for data-intensive applications.
- Collaborate with stakeholders, other product development groups, and software vendors to identify and define solutions for complex business and technical requirements.
- Develop and maintain cloud infrastructure using platforms such as AWS, Azure, or Google Cloud.
- Articulate technology solutions and explain the competitive advantages of various technology alternatives.
- Evangelize best practices to analytics teams.
- Ensure data security, privacy, and compliance with relevant regulations.
- Optimize cloud resources for cost-efficiency and performance.
- Lead the migration of on-premises data systems to the cloud.
- Implement data storage, processing, and analytics solutions using cloud-native services.
- Monitor and troubleshoot cloud infrastructure and data pipelines.
- Stay updated with the latest trends and best practices in cloud computing and data management.

Skills:
- 5+ years of hands-on design and development experience implementing data analytics applications using AWS services such as S3, Glue, AWS Step Functions, Kinesis, Lambda, Lake Formation, Athena, Elastic Container Service / Elastic Kubernetes Service, Elasticsearch, and Amazon EMR or Snowflake.
- Experience with AWS IoT services such as AWS IoT Greengrass, AWS IoT SiteWise, AWS IoT Core, and AWS IoT Events.
- Strong understanding of cloud architecture principles and best practices.
- Proficiency in designing network topology, endpoints, application registration, and network peering.
- Well versed in access management in Azure or the cloud.
- Experience with containerization technologies like Docker and Kubernetes.
- Expertise in CI/CD pipelines and version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Strong communication and leadership skills.
- Ability to work collaboratively with cross-functional teams and stakeholders.
- Knowledge of security and compliance standards related to cloud data platforms.

Technical / Functional Skills:
- At least 3+ years of experience in the implementation of the Amazon Web Services listed above.
- At least 3+ years of experience as a SAP BW developer.
- At least 3+ years of experience in Snowflake (or Redshift).
- At least 3+ years of experience as a data integration developer in Fivetran/HVR/DBT, Boomi (or Talend/Informatica).
- At least 2+ years of experience with Azure OpenAI, Azure AI Services, Microsoft Copilot Studio, Power BI, and Power Automate.
- Experience in networking and security.

Domain Expertise:
- Experience with SDLC/Agile/Scrum/Kanban.

Project Experience:
- Hands-on experience in the end-to-end implementation of data analytics applications on AWS.
- Hands-on experience in the end-to-end implementation of SAP BW applications for FICO, Sales & Distribution, and Materials Management.
- Hands-on experience with Fivetran/HVR/Boomi in developing data integration services with data from SAP, Salesforce, Workday, and other SaaS applications.
- Hands-on experience in the implementation of GenAI use cases using Azure services.
- Hands-on experience in the implementation of advanced analytics use cases using Python/R.

Certifications:
- AWS Certified Solutions Architect - Professional
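As an illustration of the streaming ingestion side (Kinesis appears in both the AWS and IoT service lists above), the sketch below pushes one telemetry record onto a Kinesis data stream with boto3; the stream name and payload are hypothetical.

```python
# Minimal sketch: writing a telemetry record to a Kinesis data stream with boto3.
# Stream name and payload are illustrative assumptions.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

record = {"device_id": "sensor-42", "temperature_c": 21.7, "ts": "2024-06-01T10:15:00Z"}

kinesis.put_record(
    StreamName="example-telemetry-stream",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["device_id"],  # device id spreads records across shards
)
```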
Posted 2 weeks ago
5.0 - 10.0 years
20 - 30 Lacs
Pune, Chennai, Bengaluru
Work from Office
Mandatory key skills: Athena, Step Functions, Spark - PySpark, ETL fundamentals, SQL (basic + advanced), Glue, Python, Lambda, data warehousing, EBS/EFS, AWS EC2, Lake Formation, Aurora, S3, modern data platform fundamentals, PL/SQL, CloudFront.

We are looking for an experienced AWS Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation.

Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions.
- Work with structured and semi-structured data using Athena, S3, and Lake Formation to enable efficient querying and access control.
- Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration.
- Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning.
- Build data lakes and data warehouses using S3, Aurora, and Athena.
- Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM.
- Develop and maintain metadata, lineage, and data cataloging capabilities.
- Participate in data modeling exercises for both OLTP and OLAP environments.
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.
- Monitor, debug, and optimize data pipelines for reliability and performance.

Required Skills & Experience:
- Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront.
- Proficient in PySpark, Python, SQL (basic and advanced), and PL/SQL.
- Solid understanding of ETL/ELT processes and data warehousing concepts.
- Familiarity with modern data platform fundamentals and distributed data processing.
- Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases.
- Experience with orchestration and workflow management tools within AWS.
- Strong debugging and performance tuning skills across the data stack.
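To make the Glue-plus-orchestration responsibilities concrete, the sketch below starts a Glue ETL job run and waits for it with boto3, the kind of step a Step Functions state machine would normally drive; the job name and arguments are hypothetical.

```python
# Minimal sketch: starting a Glue ETL job run and polling it with boto3.
# Job name and arguments are illustrative assumptions.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run_id = glue.start_job_run(
    JobName="example-orders-etl",
    Arguments={"--run_date": "2024-06-01"},
)["JobRunId"]

while True:
    state = glue.get_job_run(JobName="example-orders-etl", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print(f"Glue job finished with state {state}")
```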
Posted 1 month ago
5.0 - 10.0 years
22 - 37 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Experience: 5-8 years (Lead - 23 LPA), 8-10 years (Senior Lead - 35 LPA), 10+ years (Architect - 42 LPA); figures are maximums
Location: Bengaluru as first preference; Hyderabad, Chennai, Pune, and Gurgaon also possible
Notice: Immediate to a maximum of 15 days
Mode of Work: Hybrid

Job Description: Athena, Step Functions, Spark - PySpark, ETL fundamentals, SQL (basic + advanced), Glue, Python, Lambda, data warehousing, EBS/EFS, AWS EC2, Lake Formation, Aurora, S3, modern data platform fundamentals, PL/SQL, CloudFront.

We are looking for an experienced AWS Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation.

Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions.
- Work with structured and semi-structured data using Athena, S3, and Lake Formation to enable efficient querying and access control.
- Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration.
- Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning.
- Build data lakes and data warehouses using S3, Aurora, and Athena.
- Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM.
- Develop and maintain metadata, lineage, and data cataloging capabilities.
- Participate in data modeling exercises for both OLTP and OLAP environments.
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.
- Monitor, debug, and optimize data pipelines for reliability and performance.

Required Skills & Experience:
- Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront.
- Proficient in PySpark, Python, SQL (basic and advanced), and PL/SQL.
- Solid understanding of ETL/ELT processes and data warehousing concepts.
- Familiarity with modern data platform fundamentals and distributed data processing.
- Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases.
- Experience with orchestration and workflow management tools within AWS.
- Strong debugging and performance tuning skills across the data stack.
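Complementing the Glue-job example in the previous listing, the orchestration side can be pictured as starting an execution of an existing Step Functions state machine; the state machine ARN, execution name, and input payload below are hypothetical.

```python
# Minimal sketch: starting a Step Functions execution that orchestrates the
# Glue/Lambda pipeline. ARN, execution name, and input are illustrative assumptions.
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="ap-south-1")

execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:example-daily-load",
    name="daily-load-2024-06-01",
    input=json.dumps({"run_date": "2024-06-01"}),
)
print(execution["executionArn"])
```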
Posted 1 month ago
9.0 - 12.0 years
14 - 18 Lacs
Bengaluru
Hybrid
Primary skills: PySpark, AWS data lake, Lake Formation, and various AWS data-related services such as Glue, Athena, Lambda, and Kinesis.

Requirements:
1. Professional experience as a data engineer: minimum 10+ years.
2. Experience designing and deploying end-to-end data lake solutions in AWS.
3. Experience migrating data from Db2 to the cloud.
4. Experience with AWS ETL tools.
5. Experience with AWS databases (Redshift, DynamoDB).
6. Experience with Python, Kafka, Spark, and advanced SQL.
7. Technical mentoring experience and communication skills.
8. Experience working with the Atlassian suite (Confluence and Jira).

Detailed Skills:
- Must be able to design, develop, deploy, and maintain a data lake solution in AWS from scratch; Db2-to-AWS migration experience is necessary.
- Strong hands-on experience with AWS ETL/file movement tools, for example Glue, Athena, Lambda, and Kinesis.
- Proficient in Python, Spark, Kafka, and advanced SQL.
- Proficient with at least two to three AWS database technologies (Redshift, Aurora, RDS, S3, and other AWS data services), covering security, policies, and access management.
- Experience with one or more relational databases on AWS RDS such as SQL Server, NoSQL databases such as DynamoDB, and cloud data warehouse databases such as AWS Redshift.
- Good understanding of Agile development, CloudFormation templates, AWS CodeBuild, and AWS CodePipeline.
- Understanding of the Virtual Private Cloud environment, including VPC, IAM, EC2 instances, security groups, network access control, Elastic IPs, and public and private subnets.
- Familiarity with building scalable production systems with high availability using Elastic Load Balancers and Auto Scaling across different regions and availability zones.
- Familiarity with DevOps tools, deployment, and orchestration technologies.
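For the Glue ETL tooling listed above, a minimal Glue job script might look like the sketch below; the catalog database, table, filter, and output path are illustrative assumptions.

```python
# Minimal sketch of an AWS Glue ETL job script: read a catalogued table, drop rows
# with a null key, and write Parquet back to S3. Names and paths are illustrative assumptions.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table registered in the Glue Data Catalog (hypothetical names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="curated_lake", table_name="orders"
)

# Switch to a Spark DataFrame for the simple filter, then write partition-friendly Parquet.
valid_orders = orders.toDF().filter("order_id IS NOT NULL")

(valid_orders.write
    .mode("append")
    .parquet("s3://example-curated-bucket/orders_clean/"))

job.commit()
```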
Posted 2 months ago
9 - 14 years
25 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Hiring a Lead AWS Data Engineer for Hyderabad/Chennai/Noida/Pune/Bangalore. Immediate to 15-day joiners.

- Deep understanding of PII/PHI detection and privacy standards (HIPAA).
- Experience using AWS Glue, S3, Lake Formation, Kinesis, and DMS.
- Infrastructure with VPC segregation and IAM-based access.
- Monitoring with CloudWatch, X-Ray, and EventBridge.
- Expertise in RDS.
- Strong knowledge of containerized environments (ECS/EKS/VDS).
- Proficient in Python, PySpark, and infrastructure-as-code (CloudFormation or Terraform).
- Experience in lineage tracking, metadata cataloging, and cloud-native monitoring tools.
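Given the emphasis on PII/PHI control with Lake Formation, the sketch below grants column-restricted SELECT on a catalogued table with boto3; the role ARN, database, table, and excluded columns are hypothetical.

```python
# Minimal sketch: Lake Formation grant that excludes identifier columns,
# a common way to keep PII/PHI out of analyst queries.
# Role ARN, database, table, and column names are illustrative assumptions.
import boto3

lakeformation = boto3.client("lakeformation", region_name="us-east-1")

lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier":
               "arn:aws:iam::123456789012:role/example-analyst-role"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "clinical_lake",
            "Name": "patient_events",
            # Grant SELECT on every column except the direct identifiers.
            "ColumnWildcard": {"ExcludedColumnNames": ["ssn", "patient_name"]},
        }
    },
    Permissions=["SELECT"],
)
```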
Posted 2 months ago