3.0 - 8.0 years
10 - 20 Lacs
Ahmedabad, Bengaluru, Mumbai (All Areas)
Work from Office
Design, develop, and deliver scalable web applications using ASP.NET Core and Angular. Monitor cloud infrastructure (AWS) and implement CI/CD pipelines. Strong hands-on experience with ASP.NET Core (MVC & Web API) is required. Required candidate profile: ASP.NET Core, Angular (latest versions preferred), AWS (Amazon Web Services), MongoDB, Azure.
Posted 2 weeks ago
7.0 - 12.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Position Summary: We are seeking a highly skilled ETL QA Engineer with at least 6 years of experience in ETL/data pipeline testing on the AWS cloud stack, specifically with Redshift, AWS Glue, S3, and related data integration tools. The ideal candidate should be proficient in SQL, capable of reviewing and validating stored procedures, and able to automate ETL test cases using Python or suitable automation frameworks. Strong communication skills are essential, and web application testing exposure is a plus.

Technical Skills Required:
- SQL Expertise: Write, debug, and optimize complex SQL queries; validate data across source systems, staging areas, and reporting layers; review and validate stored procedures.
- ETL Testing Experience: Hands-on experience with AWS Glue, Redshift, S3, and data pipelines; validate transformations, data flow accuracy, and pipeline integrity.
- ETL Automation: Automate ETL tests using Python, PyTest, or other scripting frameworks. Exposure to TestNG, Selenium, or similar automation tools for testing UIs or APIs related to data validation is nice to have.
- Cloud Technologies: Deep understanding of the AWS ecosystem, especially around ETL and data services; familiarity with orchestration (e.g., Step Functions, Lambda), security, and logging.
- Health Check Automation: Build SQL- and Python-based health check scripts to monitor pipeline sanity and data integrity.
- Reporting Tools (nice to have): Exposure to tools like Jaspersoft, Tableau, or Power BI for report layout and aggregation validation.
- Root Cause Analysis: Strong debugging skills to trace data discrepancies and report logical/data errors to development teams.
- Communication: Must be able to communicate clearly with both technical and non-technical stakeholders.

Key Responsibilities:
- Design and execute test plans and test cases for validating ETL pipelines and data transformations.
- Ensure accuracy and integrity of data in transactional databases, staging zones, and data warehouses (Redshift).
- Review stored procedures and SQL scripts to validate transformation logic.
- Automate ETL test scenarios using Python or other test automation tools as applicable.
- Implement health check mechanisms for automated validation of daily pipeline jobs.
- Investigate data issues and perform root cause analysis.
- Validate reports and dashboards, ensuring correct filters, aggregations, and visualizations.
- Collaborate with developers, analysts, and business teams to understand requirements and ensure complete test coverage.
- Report testing progress and results clearly and in a timely manner.

Nice to Have:
- Web testing experience using Selenium or Appium.
- Experience in API testing and validation of data exposed via APIs.
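The reconciliation-style test cases this role describes could be sketched in the PyTest style as below. This is a minimal illustration, not the team's actual framework: the two fetch_* helpers are hypothetical stand-ins for real SQL queries against the source system and the Redshift target.

```python
# Hypothetical sketch of an ETL reconciliation test. In a real suite the
# fetch_* helpers would run SELECT COUNT(*) ... GROUP BY load_date against
# the source system and against Redshift; here they return canned data.

def fetch_source_counts():
    return {"2024-01-01": 1000, "2024-01-02": 1042}

def fetch_target_counts():
    return {"2024-01-01": 1000, "2024-01-02": 1042}

def reconcile(source, target):
    """Return {key: (source_count, target_count)} for every mismatch."""
    keys = set(source) | set(target)
    return {k: (source.get(k, 0), target.get(k, 0))
            for k in keys if source.get(k, 0) != target.get(k, 0)}

def test_row_counts_match():
    # PyTest discovers test_* functions; an empty dict means no discrepancies.
    assert reconcile(fetch_source_counts(), fetch_target_counts()) == {}
```

The same reconcile helper can back a daily health check by pointing the fetchers at the staging and reporting layers.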
Posted 2 weeks ago
8.0 - 12.0 years
30 - 40 Lacs
Pune
Work from Office
Assessment & Analysis:
- Review CAST software intelligence reports to identify technical debt, architectural flaws, and cloud readiness.
- Conduct manual assessments of applications to validate findings and prioritize migration efforts.
- Identify refactoring needs (e.g., monolithic to microservices, serverless adoption).
- Evaluate legacy systems (e.g., .NET Framework, Java EE) for compatibility with AWS services.

Solution Design:
- Develop migration strategies (rehost, replatform, refactor, retire) for each application.
- Architect AWS-native solutions using services like EC2, Lambda, RDS, S3, and EKS.
- Design modernization plans for legacy systems (e.g., .NET Framework to .NET Core, Java EE to Spring Boot).
- Ensure compliance with the AWS Well-Architected Framework (security, reliability, performance, cost optimization).

Collaboration & Leadership:
- Work with cross-functional teams (developers, DevOps, security) to validate designs.
- Partner with clients to align technical solutions with business objectives.
- Mentor junior architects and engineers on AWS best practices.

Job Title: Senior Solution Architect - Cloud Migration & Modernization (AWS). Location: [Insert Location]. Department: Digital Services. Reports To: Cloud SL
Posted 2 weeks ago
3.0 - 5.0 years
4 - 8 Lacs
Ahmedabad
Work from Office
About the Role: Grade Level (for internal use): 09. S&P Global Market Intelligence. The Role: Software Developer II (.NET Backend Developer). Grade (relevant for internal applicants only): 9. The Location: Ahmedabad, Gurgaon, Hyderabad.

The Team: S&P Global Market Intelligence, a best-in-class sector-focused news and financial information provider, is looking for a Software Developer to join our Software Development team in our India offices. This is an opportunity to work on a self-managed team to maintain, update, and implement processes utilized by other teams; coordinate with stakeholders to design innovative functionality in existing and future applications; and work across teams to enhance the flow of our data.

What's in it for you: This is the place to hone your existing skills while being exposed to fresh and divergent technologies, including the latest, cutting-edge technologies in the full-stack ecosystem, with the opportunity to grow personally and professionally. Exposure to AWS Cloud solutions is an added advantage.

Responsibilities:
- Identify, prioritize, and execute tasks in an Agile software development environment.
- Develop solutions to support key business needs.
- Engineer components and common services based on standard development models, languages, and tools.
- Produce system design documents and participate actively in technical walkthroughs.
- Demonstrate a strong sense of ownership and responsibility with release goals, including understanding requirements, technical specifications, design, architecture, implementation, unit testing, builds/deployments, and code management.
- Build and maintain the environment for speed, accuracy, consistency, and uptime.
- Collaborate with team members across the globe.
- Interface with users, business analysts, quality assurance testers, and other teams as needed.

What We're Looking For
Basic Qualifications:
- Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent.
- 3-5 years of experience.
- Solid experience building processes; debugging, refactoring, and enhancing existing code, with an understanding of performance and scalability.
- Competency in C#, .NET, .NET Core.
- Experience with DevOps practices and modern CI/CD deployment models using Jenkins.
- Experience supporting production environments.
- Knowledge of T-SQL and MS SQL Server.
- Exposure to Python/Scala/AWS technologies is a plus.
- Exposure to React/Angular is a plus.
Preferred Qualifications:
- Exposure to DevOps practices and CI/CD pipelines such as Azure DevOps or GitHub Actions.
- Familiarity with automated unit testing is advantageous.
- Exposure to AWS Cloud solutions is an added advantage.
Posted 2 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Hyderabad
Work from Office
What you will do: In this vital role you will be responsible for designing, building, and maintaining scalable, secure, and reliable AWS cloud infrastructure. This is a hands-on engineering role requiring deep expertise in Infrastructure as Code (IaC), automation, cloud networking, and security. The ideal candidate should have strong AWS knowledge and be capable of writing and maintaining Terraform, CloudFormation, and CI/CD pipelines to streamline cloud deployments. Please note, this is an onsite role based in Hyderabad.

Roles & Responsibilities:
AWS Infrastructure Design & Implementation:
- Architect, implement, and manage highly available AWS cloud environments.
- Design VPCs, subnets, security groups, and IAM policies to enforce security standards.
- Optimize AWS costs using reserved instances, savings plans, and auto-scaling.
Infrastructure as Code (IaC) & Automation:
- Develop, maintain, and enhance Terraform and CloudFormation templates for cloud provisioning.
- Automate deployment, scaling, and monitoring using AWS-native tools and scripting.
- Implement and manage CI/CD pipelines for infrastructure and application deployments.
Cloud Security & Compliance:
- Enforce standard processes in IAM, encryption, and network security.
- Ensure compliance with SOC 2, ISO 27001, and NIST standards.
- Implement AWS Security Hub, GuardDuty, and WAF for threat detection and response.
Monitoring & Performance Optimization:
- Set up AWS CloudWatch, Prometheus, Grafana, and logging solutions for proactive monitoring.
- Implement auto-scaling, load balancing, and caching strategies for performance optimization.
- Troubleshoot cloud infrastructure issues and conduct root cause analysis.
Collaboration & DevOps Practices:
- Work closely with software engineers, SREs, and DevOps teams to support deployments.
- Maintain GitOps standard processes for cloud infrastructure versioning.
- Support on-call rotation for high-priority cloud incidents.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Master's degree and 4 to 6 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR
- Bachelor's degree and 6 to 8 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR
- Diploma and 10 to 12 years of experience in computer science, IT, or a related field with hands-on cloud experience.
Must-Have Skills:
- Deep hands-on experience with AWS (EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, API Gateway, etc.).
- Expertise in Terraform and CloudFormation for AWS infrastructure automation.
- Strong knowledge of AWS networking (VPC, Direct Connect, Transit Gateway, VPN, Route 53).
- Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (Jenkins, GitHub Actions, CodePipeline, etc.).
- Strong troubleshooting and debugging skills in cloud networking, storage, and security.
Good-to-Have Skills:
- Experience with Kubernetes (EKS) and service mesh architectures.
- Knowledge of AWS Lambda and event-driven architectures.
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation.
- Exposure to multi-cloud environments (Azure, GCP).
- Familiarity with HPC, DGX Cloud.
Professional Certifications (preferred): AWS Certified Solutions Architect (Associate or Professional), AWS Certified DevOps Engineer Professional, Terraform Associate.
Soft Skills: Strong analytical and problem-solving skills; ability to work effectively with global, virtual teams; effective communication and collaboration with cross-functional teams; ability to work in a fast-paced, cloud-first environment.
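The cost-optimization duty above (reserved instances vs. on-demand) comes down to a break-even calculation. A minimal sketch, with entirely hypothetical prices rather than real AWS rates:

```python
# Illustrative reserved-instance break-even calculation. All prices here
# are made-up examples, not actual AWS pricing.

def breakeven_hours(on_demand_hourly, reserved_upfront, reserved_hourly):
    """Yearly usage hours above which a 1-year reservation is cheaper.

    Returns None if the reservation can never pay off.
    """
    saving_per_hour = on_demand_hourly - reserved_hourly
    if saving_per_hour <= 0:
        return None
    return reserved_upfront / saving_per_hour

def cheaper_option(expected_hours, on_demand_hourly,
                   reserved_upfront, reserved_hourly):
    """Compare total yearly cost of the two purchasing options."""
    on_demand = expected_hours * on_demand_hourly
    reserved = reserved_upfront + expected_hours * reserved_hourly
    return "reserved" if reserved < on_demand else "on-demand"
```

For example, at a hypothetical $0.50/hr on-demand vs. $1000 upfront plus $0.25/hr reserved, the break-even sits at 4000 hours per year, so a host running continuously favors the reservation.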
Posted 2 weeks ago
1.0 - 3.0 years
3 - 6 Lacs
Hyderabad
Work from Office
We are seeking an MDM Associate Analyst with 2-5 years of development experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills, and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong MDM experience across configuration (L3 configuration, asset creation, data modeling, etc.), ETL and data mappings (CAI, CDI), data mastering (match/merge and survivorship rules), and source and target integrations (REST API, batch integration, integration with Databricks tables, etc.).

Roles & Responsibilities:
- Analyze and manage customer master data using Reltio or Informatica MDM solutions.
- Perform advanced SQL queries and data analysis to validate and ensure master data integrity.
- Leverage Python, PySpark, and Databricks for scalable data processing and automation.
- Collaborate with business and data engineering teams for continuous improvement in MDM solutions.
- Implement data stewardship processes and workflows, including approval and DCR mechanisms.
- Utilize AWS cloud services for data storage and compute processes related to MDM.
- Contribute to metadata and data modeling activities.
- Track and manage data issues using tools such as JIRA and document processes in Confluence.
- Apply Life Sciences/Pharma industry context to ensure data standards and compliance.

Basic Qualifications and Experience:
- Master's degree with 1-3 years of experience in Business, Engineering, IT, or a related field; OR
- Bachelor's degree with 2-5 years of experience in Business, Engineering, IT, or a related field; OR
- Diploma with 6-8 years of experience in Business, Engineering, IT, or a related field.
Must-Have Skills:
- Strong experience with Informatica or Reltio MDM platforms, building configurations from scratch (L3 configuration, data modeling, asset creation, setting up API integrations, orchestration).
- Strong experience building data mappings, data profiling, and creating and implementing business rules for data quality and data transformation.
- Strong experience implementing match and merge rules and survivorship of golden records.
- Expertise in integrating master data records with downstream systems.
- Very good understanding of DWH basics and good knowledge of data modeling.
- Experience with IDQ, data modeling, and approval workflow/DCR.
- Advanced SQL expertise and data wrangling.
- Exposure to Python and PySpark for data transformation workflows.
- Knowledge of MDM, data governance, stewardship, and profiling practices.
Good-to-Have Skills:
- Familiarity with Databricks and AWS architecture.
- Background in Life Sciences/Pharma industries.
- Familiarity with project tools like JIRA and Confluence.
- Basics of data engineering concepts.
Professional Certifications: Any ETL certification (e.g., Informatica); any data analysis certification (SQL, Python, Databricks); any cloud certification (AWS or Azure).
Soft Skills: Strong analytical abilities to assess and improve master data processes and solutions; excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders; effective problem-solving skills to address data-related issues and implement scalable solutions.
Ability to work effectively with global, virtual teams. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
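The match/merge and survivorship rules named above decide, attribute by attribute, which source's value becomes the golden record. A hedged sketch of one common rule family (source priority, with most-recent-update as tie-breaker); the source names and record shape here are hypothetical, not Informatica's or Reltio's actual configuration model:

```python
# Sketch of a survivorship rule: for each attribute, the value from the
# highest-priority source wins; ties go to the most recently updated
# record; empty values never overwrite real ones. Source names are
# illustrative examples only.

SOURCE_PRIORITY = {"CRM": 1, "ERP": 2, "WEB": 3}  # lower rank = more trusted

def golden_record(records):
    """Merge per-source records into one golden record.

    Each record looks like:
      {"source": "CRM", "updated": "YYYY-MM-DD", "attrs": {...}}
    """
    merged, provenance = {}, {}
    for rec in records:
        rank = SOURCE_PRIORITY.get(rec["source"], 99)  # unknown sources last
        for attr, value in rec["attrs"].items():
            if value in (None, ""):
                continue  # never survive an empty value
            current = provenance.get(attr)
            wins = (current is None
                    or rank < current[0]
                    or (rank == current[0] and rec["updated"] > current[1]))
            if wins:
                merged[attr] = value
                provenance[attr] = (rank, rec["updated"])
    return merged
```

In a real MDM platform this logic is declared as configuration rather than code, but the precedence semantics are the same.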
Posted 2 weeks ago
5.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Title: AWS, SQL, Snowflake, Control-M, ServiceNow - Operational Engineer (Weekend on-call). Req ID: 325686. We are currently seeking an AWS, SQL, Snowflake, Control-M, ServiceNow - Operational Engineer (Weekend on-call) to join our team in Bangalore, Karnataka (IN-KA), India (IN). Minimum experience on key skills: 5 to 10 years. We are looking for an operational engineer who is ready to work weekends on call as the primary criterion. Skills we look for: AWS cloud (SQS, SNS, DynamoDB, EKS), SQL (PostgreSQL, Cassandra), Snowflake, Control-M/Autosys/Airflow, ServiceNow, Datadog, Splunk, Grafana, Python/shell scripting.
Posted 2 weeks ago
5.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Req ID: 306668. We are currently seeking a Cloud Solution Delivery Sr Advisor to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Position Overview: We are seeking a highly skilled and experienced Lead Data Engineer to join our dynamic team. The ideal candidate will have a strong background in implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies, leading teams and directing engineering workloads. This role requires a deep understanding of data engineering and cloud services, and the ability to implement high-quality solutions.

Key Responsibilities
- Lead and direct a small team of engineers
- Engineer end-to-end data solutions using AWS services, including Lambda, S3, Snowflake, DBT, and Apache Airflow
- Catalogue data
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Provide best-in-class documentation for downstream teams to develop, test, and run data products built using our tools
- Test our tooling, and provide a framework for downstream teams to test their utilisation of our products
- Help deliver CI, CD, and IaC for both our own tooling and as templates for downstream teams
- Use DBT projects to define re-usable pipelines

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5+ years of experience in data engineering
- 2+ years of experience in leading a team of data engineers
- Experience with AWS cloud services
- Expertise with Python and SQL
- Experience using Git/GitHub for source control management
- Experience with Snowflake
- Strong understanding of lakehouse architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Strong use of version control and proven ability to govern a team in the best-practice use of version control
- Strong understanding of Agile and proven ability to govern a team in the best-practice use of Agile methodologies

Preferred Skills and Qualifications
- An understanding of lakehouses
- An understanding of Apache Iceberg tables
- An understanding of data cataloguing
- Knowledge of Apache Airflow for data orchestration
- An understanding of DBT
- SnowPro Core certification
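Orchestrators such as Airflow derive an execution order from declared task dependencies. A minimal sketch of that resolution using the standard library; the task names are hypothetical, and a real DAG would of course use Airflow operators rather than a plain dict:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each task maps to the set of upstream tasks it depends on - the same
# shape of information an Airflow DAG declares with >> edges.

def execution_order(dependencies):
    """Return one valid run order for a dependency graph."""
    return list(TopologicalSorter(dependencies).static_order())

pipeline = {
    "extract": set(),
    "load_s3": {"extract"},
    "transform_dbt": {"load_s3"},
    "catalogue": {"transform_dbt"},
}
```

Running `execution_order(pipeline)` yields an order in which every task appears after all of its upstream dependencies, which is exactly the guarantee a scheduler needs before dispatching work.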
Posted 2 weeks ago
2.0 - 7.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Req ID: 324959. We are currently seeking an L1 Cloud Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Cloud Platform / Infrastructure Engineer - Grade 6. At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence, and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here.

Preferred Experience
- As an L1 cloud engineer, a good understanding of cloud platform, networking, and storage principles, with a focus on Azure.
- Cloud administration, maintenance, and troubleshooting experience.
- Monitor cloud and infrastructure services to ensure uninterrupted operations.
- Monitor and manage support tickets during assigned shifts, ensuring timely and accurate resolution of issues.
- Respond to alerts and incidents, escalating to higher-level support as necessary.
- Able to provide shift-hours support at the L1 level.
- Experience updating KB articles and SOPs.
- Request additional information from clients when necessary to accurately diagnose and resolve issues.
- Acknowledge and analyse client emails to identify and understand issues.
- Provide clear guidance and relevant information to resolve first-level issues.
- Escalate complex issues to the internal L2 team and track the progress of these escalations to ensure prompt resolution.
- Well experienced in handling incidents, service requests, and change requests.
- Passion for delivering timely and outstanding customer service.
- Great written and oral communication skills with internal and external customers.

Basic Qualifications
- 2+ years of overall operational experience
- 2+ years of Azure/AWS experience
- 2+ years of experience working in a diverse cloud support environment in a 24x7 production support model

Preferred Certifications: Azure Fundamentals, AWS Cloud Practitioner. Four-year BS/BA in Information Technology degree or equivalent experience.
Posted 2 weeks ago
6.0 - 10.0 years
8 - 18 Lacs
Gurugram
Work from Office
Key Responsibilities and Requirements:
- More than 6 years of experience in backend/API testing for a payment system.
- Develop comprehensive test strategies, plans, and test cases for complex backend payment systems using Python, PyTest, Java, or similar languages.
- Automate API and unit tests and integrate them with CI/CD pipelines for seamless deployments.
- Deep knowledge of APIs and HTTP protocols, message queues, Lambda functions, and AWS services.
- Understanding of Linux, cloud platforms (AWS), and CDNs is preferred, as well as familiarity with AWS log analysis.
- Work experience with payment system testing is preferred.
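An API-level test of the kind described above typically asserts invariants on the response rather than exact payloads. A hedged sketch in the PyTest style, with the HTTP client stubbed out; the endpoint path, payload shape, and field names are hypothetical, not any real payment API:

```python
# Sketch of a payment-creation API test. fake_client stands in for a real
# HTTP call (e.g. requests.post); in CI it would be swapped for the live
# service or a contract-test double.

def validate_payment_response(resp):
    """Assert the invariants a successful payment-creation response must satisfy."""
    assert resp["status"] == 201
    body = resp["json"]
    assert body["state"] in {"AUTHORIZED", "PENDING"}
    assert body["amount_minor"] > 0
    assert len(body["transaction_id"]) > 0

def fake_client(path, payload):
    # Canned successful response, echoing the requested amount.
    return {"status": 201,
            "json": {"state": "AUTHORIZED",
                     "amount_minor": payload["amount_minor"],
                     "transaction_id": "txn-001"}}

def test_create_payment():
    resp = fake_client("/payments", {"amount_minor": 1999, "currency": "INR"})
    validate_payment_response(resp)
```

Keeping the validation in its own function lets the same invariants run against unit-level stubs and against the deployed service in a CI/CD pipeline.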
Posted 2 weeks ago
7.0 - 12.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Req ID: 325298. We are currently seeking an AWS Redshift Administrator Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties:
- Administer and maintain scalable cloud environments and applications for the data organization.
- Understand the business objectives of the company and create cloud-based solutions to facilitate those objectives.
- Implement Infrastructure as Code and deploy code using Terraform and GitLab.
- Install and maintain software, services, and applications by identifying system requirements.
- Hands-on AWS services and DB and server troubleshooting experience.
- Extensive database experience with RDS, AWS Redshift, and MySQL.
- Maintain the environment by identifying system requirements, installing upgrades, and monitoring system performance.
- Knowledge of day-to-day database operations, deployments, and development.
- Experienced in Snowflake.
- Knowledge of SQL and performance tuning.
- Knowledge of Linux shell scripting or Python.
- Migrate systems from one AWS account to another.
- Maintain system performance by performing system monitoring, analysis, and performance tuning.
- Troubleshoot system hardware, software, and operating and system management systems.
- Secure the web system by developing system access, monitoring, control, and evaluation.
- Test disaster recovery policies and procedures, complete back-ups, and maintain documentation.
- Upgrade systems and services; develop, test, evaluate, and install enhancements and new software.
- Communicate with internal teams, like EIMO, Operations, and Cloud Architects.
- Communicate with stakeholders and build applications to meet project needs.

Minimum Skills Required:
- Bachelor's degree in computer science or engineering.
- Minimum of 7 years of experience in system, platform, and AWS cloud administration.
- Minimum of 5 to 7 years of database administration and AWS experience using the latest AWS technologies - AWS EC2, Redshift, VPC, S3, AWS RDS.
- Experience with Java, Python, Redshift, MySQL, or equivalent database tools.
- Experience with Agile software development using JIRA.
- Experience on multiple OS platforms with a strong emphasis on Linux and Windows systems.
- Experience with OS-level scripting environments such as KSH shell and PowerShell.
- Experience with version management tools and CI/CD pipelines.
- In-depth knowledge of the TCP/IP protocol suite, security architecture, and securing and hardening operating systems, networks, databases, and applications.
- Advanced SQL knowledge and experience working with relational databases, query authoring (SQL), and query performance tuning.
- Experience supporting and optimizing data pipelines and data sets.
- Knowledge of the Incident Response life cycle.
- AWS Solutions Architect certifications.
- Strong written and verbal communication skills.
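The monitoring and analysis duties above often reduce to a set of named SQL checks run on a schedule. A minimal sketch of such a health-check runner; the table names and SQL are illustrative, and the query executor is injected so the same checks can target Redshift, RDS, or a test stub:

```python
# Sketch of a SQL health-check runner. Each check is (name, sql, predicate);
# run_query is any callable that executes the SQL and returns a scalar, so
# a real deployment would pass a Redshift/psycopg2 cursor wrapper.

CHECKS = [
    ("no_empty_load",
     "SELECT COUNT(*) FROM staging.orders",
     lambda n: n > 0),
    ("no_future_dates",
     "SELECT COUNT(*) FROM staging.orders WHERE order_date > CURRENT_DATE",
     lambda n: n == 0),
]

def run_health_checks(run_query, checks=CHECKS):
    """Evaluate each check's predicate on its query result; return {name: passed}."""
    return {name: bool(pred(run_query(sql))) for name, sql, pred in checks}
```

A nightly job can page on any False in the result dict, which keeps alerting logic separate from the check definitions.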
Posted 2 weeks ago
1.0 - 6.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Req ID: 328302. We are currently seeking an AWS Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Job Title: Digital Engineering Sr Associate. NTT DATA Services strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Lead Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Basic Qualifications
- 1+ years' experience in AWS infrastructure

Preferred Experience
- Excellent communication and collaboration skills. AWS certifications are preferred.
- Expertise in AWS EC2: creating, managing, patching, and troubleshooting instances.
- Good knowledge of access and identity management.
- Monitoring tools: CloudWatch (New Relic or other monitoring), logging.
- AWS storage: EBS, EFS, S3, Glacier; adding and extending disks.
- AWS backup and restoration.
- Strong understanding of networking concepts to create VPCs, subnets, ACLs, and security groups, and of security best practices in cloud environments.
- Knowledge of PaaS-to-IaaS migration strategies.
- Scripting experience (must be fluent in a scripting language such as Python).
- Detail-oriented self-starter capable of working independently.
- Knowledge of IaC (Terraform) and best practices.
- Experience with container orchestration utilizing ECS, EKS, Kubernetes, or Docker Swarm.
- Experience with one or more configuration management tools (Ansible, Chef, Salt, Puppet) for infrastructure, networking, and AWS databases.
- Familiarity with containerization and orchestration tools, such as Docker and Kubernetes.
- Bachelor's degree in computer science or a related field.
- Any of the AWS Associate certifications.

GCP Knowledge: Cloud IAM, Resource Manager, Multi-factor Authentication, Cloud KMS, Cloud Billing, Cloud Console, Stackdriver, Cloud SQL, Cloud Spanner, Cloud Bigtable, Cloud Run container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP.

Ideal Mindset: Lifelong Learner - you are always seeking to improve your technical and non-technical skills. Team Player - you want to see everyone on the team succeed and are willing to go the extra mile to help a teammate in need. Listener - you listen to the needs of the customer and make those the priority throughout development.
Posted 2 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
Noida, Bengaluru
Work from Office
Req ID: 304647. We are currently seeking an AWS Lead Engineer to join our team in Remote, Karnataka (IN-KA), India (IN).

Basic Qualifications
- 3+ years' experience in AWS infrastructure

Preferred Experience
- Excellent communication and collaboration skills. AWS certifications are preferred.
- Expertise in AWS EC2: creating, managing, patching, and troubleshooting instances.
- Good knowledge of access and identity management.
- Monitoring tools: CloudWatch (New Relic or other monitoring), logging.
- AWS storage: EBS, EFS, S3, Glacier; adding and extending disks.
- AWS backup and restoration.
- Strong understanding of networking concepts to create VPCs, subnets, ACLs, and security groups, and of security best practices in cloud environments.
- Strong knowledge of PaaS-to-IaaS migration strategies.
- Scripting experience (must be fluent in a scripting language such as Python).
- Detail-oriented self-starter capable of working independently.
- Knowledge of IaC (Terraform) and best practices.
- Experience with container orchestration utilizing ECS, EKS, Kubernetes, or Docker Swarm.
- Experience with one or more configuration management tools (Ansible, Chef, Salt, Puppet) for infrastructure, networking, and AWS databases.
- Familiarity with containerization and orchestration tools, such as Docker and Kubernetes.
- Bachelor's degree in computer science or a related field.
- Any of the AWS Associate certifications.

Ideal Mindset: Lifelong Learner - you are always seeking to improve your technical and non-technical skills. Team Player - you want to see everyone on the team succeed and are willing to go the extra mile to help a teammate in need. Listener - you listen to the needs of the customer and make those the priority throughout development.
Posted 2 weeks ago
1.0 - 6.0 years
1 - 5 Lacs
Noida, Chennai, Bengaluru
Work from Office
Req ID: 328301. We are currently seeking an AWS Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Job Title: Digital Engineering Sr Associate. NTT DATA Services strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Lead Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Basic Qualifications
- 1+ years' experience in AWS infrastructure

Preferred Experience
- Excellent communication and collaboration skills. AWS certifications are preferred.
- Expertise in AWS EC2: creating, managing, patching, and troubleshooting instances.
- Good knowledge of access and identity management.
- Monitoring tools: CloudWatch (New Relic or other monitoring), logging.
- AWS storage: EBS, EFS, S3, Glacier; adding and extending disks.
- AWS backup and restoration.
- Strong understanding of networking concepts to create VPCs, subnets, ACLs, and security groups, and of security best practices in cloud environments.
- Knowledge of PaaS-to-IaaS migration strategies.
- Scripting experience (must be fluent in a scripting language such as Python).
- Detail-oriented self-starter capable of working independently.
- Knowledge of IaC (Terraform) and best practices.
- Experience with container orchestration utilizing ECS, EKS, Kubernetes, or Docker Swarm.
- Experience with one or more configuration management tools (Ansible, Chef, Salt, Puppet) for infrastructure, networking, and AWS databases.
- Familiarity with containerization and orchestration tools, such as Docker and Kubernetes.
- Bachelor's degree in computer science or a related field.
- Any of the AWS Associate certifications.

GCP Knowledge: Cloud IAM, Resource Manager, Multi-factor Authentication, Cloud KMS, Cloud Billing, Cloud Console, Stackdriver, Cloud SQL, Cloud Spanner, Cloud Bigtable, Cloud Run container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP.

Ideal Mindset: Lifelong Learner - you are always seeking to improve your technical and non-technical skills. Team Player - you want to see everyone on the team succeed and are willing to go the extra mile to help a teammate in need. Listener - you listen to the needs of the customer and make those the priority throughout development.
Posted 2 weeks ago
7.0 - 12.0 years
16 - 20 Lacs
Pune
Work from Office
Req ID: 301930. We are currently seeking a Digital Solution Architect Lead Advisor to join our team in Pune, Maharashtra (IN-MH), India (IN).

Position Overview: We are seeking a highly skilled and experienced Data Solution Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture and cloud services, and the ability to drive innovative solutions to meet business needs.

Key Responsibilities
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS
- Design and implement data streaming pipelines using Kafka/Confluent Kafka
- Develop data processing applications using Python
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Provide technical leadership and mentorship to development teams
- Stay current with emerging technologies and industry trends

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Proficiency in Kafka/Confluent Kafka and Python
- Experience with Snyk for security scanning and vulnerability management
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
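A recurring building block in streaming pipelines like the one described here is micro-batching: buffering consumed records and flushing a batch when a count or byte threshold is hit, so downstream sinks (e.g. S3 objects) stay reasonably sized. A minimal sketch of just that batching step; in production the records would come from a Kafka consumer and the flush would be an S3 write, both omitted here:

```python
# Illustrative micro-batching for a streaming sink. Records are byte
# strings; a batch is flushed once it would exceed max_count records or
# max_bytes of payload.

def batch_records(records, max_count=3, max_bytes=1024):
    """Group an iterable of byte strings into flush-ready batches."""
    batches, current, current_bytes = [], [], 0
    for rec in records:
        # Flush before adding if the current batch is already full, or if
        # adding this record would blow the byte budget.
        if current and (len(current) >= max_count
                        or current_bytes + len(rec) > max_bytes):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(rec)
        current_bytes += len(rec)
    if current:
        batches.append(current)  # flush the partial final batch
    return batches
```

Tuning the two thresholds trades end-to-end latency against the number of small objects written, which is a typical cost/performance lever in the architectures this role covers.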
Posted 2 weeks ago
5.0 - 10.0 years
13 - 17 Lacs
Pune
Work from Office
Req ID: 301172 We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Pune, Maharashtra (IN-MH), India (IN). Location - Remote The AWS Lead Engineer will be required to design and build the cloud foundations platform: translating project-specific needs into a cloud structure, designing a cloud environment that covers all requirements with appropriate weight given to security, and carrying out deployment and integration of applications in the designed cloud environment. Understand the needs of the business/client and implement cloud strategies that meet those needs. The candidate will also need good experience with software development principles, IaC, and GitHub as DevOps tooling. Provide the necessary design to the team for building cloud infrastructure solutions; train and guide the team in provisioning, using, and integrating the cloud services proposed in the design. Skills: Must-haves 5+ years of proficient experience with AWS Cloud (AWS Core) 3+ years' relevant experience designing cloud infrastructure solutions and cloud account migration Proficient in cloud networking and network configuration Proficient in Terraform for managing infrastructure as code (module-based provisioning of infra, connectivity, provisioning of data services, monitoring services) Proficient in GitHub and implementing CI/CD for infrastructure using IaC with GitHub Actions AWS CLI Experience working with these AWS services: IAM Accounts, IAM Users & Groups, IAM Roles, Access Control (RBAC, ABAC), Compute (EC2 instance types and costing), Storage (EBS, EFS, S3, etc.), VPC, VPC Peering, Security Groups, Notification & Queue services, NACL, Auto Scaling Groups, CloudWatch, DNS, Application Load Balancer, Directory Services and Identity Federation, AWS Organizations and Control Tower, AWS tagging configuration, Certificate Management MVP Monitoring tools such as Amazon CloudWatch, with hands-on experience with CloudWatch Logs.
Examples of daily activities: account provisioning support, policy provisioning, network support, resource deployment support, incident support on daily work, security incident support. DevOps experience: GitHub and GitHub Actions, Terraform, Python, Go, Grafana, ArgoCD. Nice-to-haves: Docker, Kubernetes. Able to work with both imperative and declarative approaches to set up Kubernetes resources/services.
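The CI/CD-for-infrastructure requirement above (Terraform driven from GitHub Actions) can be sketched as a minimal pull-request workflow; the repository layout (`infra/`) and workflow details are illustrative assumptions, not part of this posting:

```yaml
name: terraform-ci
on:
  pull_request:
    paths: ["infra/**"]
jobs:
  plan:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: infra
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform fmt -check          # fail fast on unformatted code
      - run: terraform init -backend=false # validate without touching remote state
      - run: terraform validate
      - run: terraform plan -input=false   # needs cloud credentials in real use
```

A real pipeline would add credential configuration (e.g. OIDC to AWS) and a gated apply job on merge; this fragment only shows the plan/validate shape.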
Posted 2 weeks ago
7.0 - 12.0 years
13 - 18 Lacs
Bengaluru
Work from Office
We are currently seeking a Lead Data Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN). Position Overview We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs. Key Responsibilities - Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, Kafka and Confluent, all within a larger and overarching programme ecosystem - Architect data processing applications using Python, Kafka, Confluent Cloud and AWS - Ensure data security and compliance throughout the architecture - Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions - Optimize data flows for performance, cost-efficiency, and scalability - Implement data governance and quality control measures - Ensure delivery of CI, CD and IaC for NTT tooling, and as templates for downstream teams - Provide technical leadership and mentorship to development teams and lead engineers - Stay current with emerging technologies and industry trends Required Skills and Qualifications - Bachelor's degree in Computer Science, Engineering, or related field - 7+ years of experience in data architecture and engineering - Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS - Strong experience with Confluent - Strong experience in Kafka - Solid understanding of data streaming architectures and best practices - Strong problem-solving skills and ability to think critically - Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders - Knowledge of Apache Airflow for data orchestration
Preferred Qualifications - An understanding of cloud networking patterns and practices - Experience with working on a library or other long-term product - Knowledge of the Flink ecosystem - Experience with Terraform - Deep experience with CI/CD pipelines - Strong understanding of the JVM language family - Understanding of GDPR and the correct handling of PII - Expertise with technical interface design - Use of Docker Responsibilities - Design and implement scalable data architectures using AWS services, Confluent and Kafka - Develop data ingestion, processing, and storage solutions using Python and AWS Lambda, Confluent and Kafka - Ensure data security and implement best practices using tools like Snyk - Optimize data pipelines for performance and cost-efficiency - Collaborate with data scientists and analysts to enable efficient data access and analysis - Implement data governance policies and procedures - Provide technical guidance and mentorship to junior team members - Evaluate and recommend new technologies to improve data architecture
Posted 2 weeks ago
4.0 - 9.0 years
4 - 8 Lacs
Noida
Work from Office
Req ID: 313916 We are currently seeking an Alation Admin or MSTR Cloud Admin to join our team in NOIDA, Uttar Pradesh (IN-UP), India (IN). Alation Admin Also known as an Alation Data Catalog Administrator, responsible for managing and maintaining the Alation Data Catalog, a platform that helps organizations discover, understand, and govern their data assets. 1. Platform Administration: Installing, configuring, and maintaining the Alation Data Catalog platform to ensure its optimal performance and reliability. 2. User Management: Managing user access, permissions, and roles within Alation, ensuring proper authentication and authorization for data access. 3. Data Governance: Implementing and enforcing data governance policies, including data classification, data lineage, and data stewardship, to maintain data quality and compliance. 4. Data Catalog Management: Curating and organizing metadata and data assets within the catalog, ensuring accurate and up-to-date information is available to users. 5. Integration: Collaborating with other IT teams to integrate Alation with data sources, databases, data lakes, and other data management systems. 6. Metadata Management: Overseeing the extraction and ingestion of metadata from various data sources into Alation, including data dictionaries, business glossaries, and technical metadata. 7. Security: Implementing and maintaining security measures, such as encryption, access controls, and auditing, to protect sensitive data and catalog information. 8. Training and Support: Providing training to users on how to effectively use the Alation Data Catalog and offering support for catalog-related inquiries and issues. 9. Data Discovery: Assisting users in discovering and accessing data assets within the catalog, promoting self-service data discovery. 10. Collaboration: Collaborating with data owners, data stewards, and data users to understand their data needs and ensure the catalog meets those requirements. 11.
Performance Monitoring: Monitoring the performance of the Alation Data Catalog platform, identifying and resolving issues to ensure optimal functionality. 12. Upgrades and Maintenance: Planning and executing platform upgrades and applying patches to stay up to date with Alation releases. 13. Documentation: Maintaining documentation for catalog configurations, processes, and best practices. 14. Reporting and Analytics: Generating reports and insights from Alation to track data usage, data lineage, and user activity. 15. Data Quality: Monitoring and improving data quality within the catalog and assisting in data quality initiatives. 16. Stay Current: Staying informed about Alation updates, new features, and industry best practices in data catalog administration. An Alation Admin plays a critical role in enabling organizations to effectively manage their data assets, foster data collaboration, and ensure data governance and compliance across the enterprise. --------------------------- MicroStrategy Cloud Admin Minimum 4+ years of MSTR administration with the following core attributes: Hands-on maintenance and administration experience with the MicroStrategy 10.x Business Intelligence product suite and the AWS Cloud platform Experience with enterprise portal integration, mobile integration, write-back to source data based on analysis by business users, and alerts via mail or mobile based on pre-defined events Ability to define and review complex metrics Ability to architect MSTR cubes for solving complex business problems Good conceptual knowledge and working experience with metadata creation (framework models, universes, etc.), creating report specifications, integration test planning & testing, unit test planning & testing, and UAT & implementation support Strong knowledge of quality processes: SDLC, review, test, configuration management, release management, defect prevention Knowledge of databases is essential, with the ability to review SQL passes and make decisions based on query timings Good experience with
MicroStrategy upgrades and configurations on Linux Hands-on experience creating MSTR deployment packages and Command Manager scripts; setting up and maintaining proper object and data security; working experience configuring, maintaining, and administering multiple environments Open to working in different shifts as per project need Excellent communication skills (written, verbal, teamwork, and issue resolution) Activities: Provide support for MicroStrategy's Business Intelligence product suite, the AWS Cloud platform, and their underlying technologies Use your strong communication skills Resolve application and infrastructure situations as they arise Perform day-to-day management of the MicroStrategy Cloud infrastructure (on AWS), including alert monitoring/remediation, change management, and incident troubleshooting and resolution Participate in scheduled and emergency infrastructure maintenance activities Collaborate and communicate effectively with peers and internal application and software development teams Maintain high-quality documentation for all related tasks Work in a strong team environment Independently manage the production MSTR environment (and associated lower environments such as Dev and UAT) Manage upgrades and vulnerabilities
Posted 2 weeks ago
5.0 - 10.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Req ID: 306669 We are currently seeking a Lead Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Position Overview We are seeking a highly skilled and experienced Lead Data/Product Engineer to join our dynamic team. The ideal candidate will have a strong background in streaming services and AWS cloud technology, leading teams and directing engineering workloads. This is an opportunity to work on the core systems supporting multiple secondary teams, so a history in software engineering and interface design would be an advantage. Key Responsibilities Lead and direct a small team of engineers engaged in - Engineering reusable assets for the later build of data products - Building foundational integrations with Kafka, Confluent Cloud and AWS - Integrating with a large number of upstream and downstream technologies - Providing best-in-class documentation for downstream teams to develop, test and run data products built using our tools - Testing our tooling, and providing a framework for downstream teams to test their utilisation of our products - Helping to deliver CI, CD and IaC for both our own tooling, and as templates for downstream teams Required Skills and Qualifications - Bachelor's degree in Computer Science, Engineering, or related field - 5+ years of experience in data engineering - 3+ years of experience with real-time (or near-real-time) streaming systems - 2+ years of experience leading a team of data engineers - A willingness to independently learn a high number of new technologies and to lead a team in learning new technologies - Experience in AWS cloud services, particularly Lambda, SNS, S3, EKS, and API Gateway - Strong experience with Python - Strong experience in Kafka - Excellent understanding of data streaming architectures and best practices - Strong problem-solving skills and ability to think critically - Excellent communication skills to convey complex technical concepts both directly and through documentation - Strong
use of version control and proven ability to govern a team in the best practice use of version control - Strong understanding of Agile and proven ability to govern a team in the best practice use of Agile methodologies Preferred Skills and Qualifications - An understanding of cloud networking patterns and practices - Experience with working on a library or other long-term product - Knowledge of the Flink ecosystem - Experience with Terraform - Experience with CI pipelines - Ability to code in a JVM language - Understanding of GDPR and the correct handling of PII - Knowledge of technical interface design - Basic use of Docker
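As a flavour of the near-real-time streaming work this role describes, a tumbling-window event counter (the core of many streaming aggregations) can be sketched in plain Python; the event shape and window size are illustrative assumptions, not part of the posting:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Example: three events in the first minute, one in the second
events = [(5, "click"), (30, "click"), (59, "view"), (61, "click")]
print(tumbling_window_counts(events))
# {0: {'click': 2, 'view': 1}, 60: {'click': 1}}
```

A production system would do the same grouping continuously over a Kafka topic (e.g. with Kafka Streams or Flink) rather than over an in-memory list, but the windowing logic is the same idea.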
Posted 2 weeks ago
5.0 - 8.0 years
0 - 0 Lacs
Hyderabad
Work from Office
Business Analyst: Must have strong communication skills and a good, quick understanding of the product vision and domain. AWS Cloud knowledge is good to have. Location: Hyderabad Experience: 5 to 8 years Notice Period: Immediate joiners
Posted 2 weeks ago
6.0 - 10.0 years
13 - 20 Lacs
Bengaluru
Hybrid
Job Description -6+ years of experience in backend development using Java. -Strong expertise in Spring Boot, Spring Cloud, and building Microservices. -Experience with REST APIs, JSON, and API integration. -Good knowledge of AWS services for deployment, storage, and compute. -Familiarity with CI/CD pipelines and tools like Jenkins, Git, Maven/Gradle. -Understanding of containerization using Docker and orchestration with Kubernetes (nice to have). -Experience with relational and NoSQL databases (e.g., MySQL, PostgreSQL, DynamoDB, MongoDB). -Solid understanding of application performance monitoring and logging tools.
Posted 2 weeks ago
8.0 - 13.0 years
20 - 35 Lacs
Hyderabad
Remote
Databricks Administrator Azure/AWS | Remote | 6+ Years Job Description: We are seeking an experienced Databricks Administrator with 6+ years of expertise in managing and optimizing Databricks environments. The ideal candidate should have hands-on experience with Azure/AWS Databricks , cluster management, security configurations, and performance optimization. This role requires close collaboration with data engineering and analytics teams to ensure smooth operations and scalability. Key Responsibilities: Deploy, configure, and manage Databricks workspaces, clusters, and jobs . Monitor and optimize Databricks performance, auto-scaling, and cost management . Implement security best practices , including role-based access control (RBAC) and encryption. Manage Databricks integration with cloud storage (Azure Data Lake, S3, etc.) and other data services . Automate infrastructure provisioning and management using Terraform, ARM templates, or CloudFormation . Troubleshoot Databricks runtime issues, job failures, and performance bottlenecks . Support CI/CD pipelines for Databricks workloads and notebooks. Collaborate with data engineering teams to enhance ETL pipelines and data processing workflows . Ensure compliance with data governance policies and regulatory requirements . Maintain and upgrade Databricks versions and libraries as needed. Required Skills & Qualifications: 6+ years of experience as a Databricks Administrator or in a similar role. Strong knowledge of Azure/AWS Databricks and cloud computing platforms . Hands-on experience with Databricks clusters, notebooks, libraries, and job scheduling . Expertise in Spark optimization, data caching, and performance tuning . Proficiency in Python, Scala, or SQL for data processing. Experience with Terraform, ARM templates, or CloudFormation for infrastructure automation. Familiarity with Git, DevOps, and CI/CD pipelines . Strong problem-solving skills and ability to troubleshoot Databricks-related issues. 
Excellent communication and stakeholder management skills. Preferred Qualifications: Databricks certifications (e.g., Databricks Certified Associate/Professional). Experience in Delta Lake, Unity Catalog, and MLflow . Knowledge of Kubernetes, Docker, and containerized workloads . Experience with big data ecosystems (Hadoop, Apache Airflow, Kafka, etc.). Email : Hrushikesh.akkala@numerictech.com Phone /Whatsapp : 9700111702 For immediate response and further opportunities, connect with me on LinkedIn: https://www.linkedin.com/in/hrushikesh-a-74a32126a/
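To illustrate the Terraform-based infrastructure automation this posting mentions, a minimal Databricks cluster definition might look like the following (the provider setup, cluster name, node type, and Spark version are illustrative assumptions, not requirements from the role):

```hcl
# Assumes the databricks/databricks Terraform provider is configured
resource "databricks_cluster" "etl" {
  cluster_name            = "etl-shared"
  spark_version           = "14.3.x-scala2.12"
  node_type_id            = "Standard_DS3_v2"   # Azure example; use an EC2 type on AWS
  autotermination_minutes = 30                  # cost control: stop idle clusters
  autoscale {
    min_workers = 1
    max_workers = 4
  }
}
```

Auto-termination and autoscaling bounds are the usual first levers for the cost management the role calls out; access control and policies would layer on top of this.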
Posted 2 weeks ago
5.0 - 8.0 years
7 - 11 Lacs
Chennai
Work from Office
About The Role Azure Engineer JD: Location - CHENNAI Rates including mark-up - 170 K/M - 190 K/M Evaluate Zayo's current landscape for gaps and readiness Gather and analyze business and technical requirements to design the cloud estate. Identify risks and develop strategies for a smooth cloud transition. Collaborate with onsite and offshore architects to design scalable, secure Azure cloud architectures using Azure services like Azure Virtual Machines, Azure App Services, and Azure Kubernetes Service Ensure best practices for security, compliance, and performance optimization. Use IaC tools like Terraform and AWS CloudFormation to automate cloud resource management. Work with DevOps teams, developers, and stakeholders to meet all cloud infrastructure requirements. Monitor cloud environments to optimize performance and cost-efficiency. Create detailed documentation of cloud architectures and processes for maintenance and future reference. Do: Provide adequate support in architecture planning, migration & installation for new projects in own tower (platform/database/middleware/backup) Lead the structural/architectural design of a platform/middleware/database/backup etc. according to various system requirements to ensure a highly scalable and extensible solution Conduct technology capacity planning by reviewing the current and future requirements Utilize and leverage the new features of all underlying technologies to ensure smooth functioning of the installed databases and applications/platforms, as applicable Strategize & implement disaster recovery plans and create and implement backup and recovery plans Manage the day-to-day operations of the tower Manage day-to-day operations by troubleshooting any issues, conducting root cause analysis (RCA), and developing fixes to avoid similar issues.
Plan for and manage upgrades, migration, maintenance, backup, installation, and configuration functions for own tower Review the technical performance of own tower and deploy ways to improve efficiency, fine-tune performance, and reduce performance challenges Develop a shift roster for the team to ensure no disruption in the tower Create and update SOPs, Data Responsibility Matrices, operations manuals, daily test plans, data architecture guidance, etc. Provide weekly status reports to the client leadership team and internal stakeholders on database activities w.r.t. progress, updates, status, and next steps Leverage technology to develop a Service Improvement Plan (SIP) through automation and other initiatives for higher efficiency and effectiveness Team Management Resourcing Forecast talent requirements as per the current and future business needs Hire adequate and right resources for the team Train direct reportees to make right recruitment and selection decisions Talent Management Ensure 100% compliance to Wipro's standards of adequate onboarding and training for team members to enhance capability & effectiveness Build an internal talent pool of HiPos and ensure their career progression within the organization Promote diversity in leadership positions Performance Management Set goals for direct reportees, conduct timely performance reviews and appraisals, and give constructive feedback to direct reports. Ensure that organizational programs like Performance Nxt are well understood and that the team is taking the opportunities presented by such programs for themselves and their levels below Employee Satisfaction and Engagement Lead and drive engagement initiatives for the team Track team satisfaction scores and identify initiatives to build engagement within the team Proactively challenge the team with larger and enriching projects/initiatives for the organization or team Exercise employee recognition and appreciation
Deliver: No. | Performance Parameter | Measure 1 | Operations of the tower | SLA adherence; knowledge management; CSAT/customer experience; identification of risk issues and mitigation plans 2 | New projects | Timely delivery; avoid unauthorised changes; no formal escalations Mandatory Skills: Cloud AWS Admin. Experience: 5-8 years. Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 2 weeks ago
7.0 - 11.0 years
35 - 45 Lacs
Pune
Hybrid
We are looking for a highly motivated Senior DevOps/MLOps engineer specializing in AWS and Kubernetes to join our team Required Candidate profile Experience in AI Operations, MLOps & Python (must). Experience deploying secure infrastructure & services in one or more cloud environments such as AWS (must), Azure, or GCP Experience in Kubernetes
Posted 2 weeks ago
3.0 - 6.0 years
5 - 15 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Work from Office
Primary skills: Cloud->Amazon Web Services DevOps, Terraform, Cloud Platform->Amazon Web Services Architecture Preferred Skills: Technology->Cloud Platform->AWS Core services Technology->Cloud Platform->Amazon Web Services Architecture Technology->Cloud Platform->Amazon Web Services DevOps Technology->Container Platform->Kubernetes Technology->Cloud Platform->AWS Container services Educational Requirements: Master of Engineering, Master of Science, Master of Technology, Bachelor of Comp. Applications, Bachelor of Engineering, Bachelor of Technology Location: Pune (willing to relocate) Experience: 5-15 years
Posted 2 weeks ago