8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
We are seeking a Capacity Planning Engineer with over 7 years of experience to join a client team in Pune. As a Capacity Planning Engineer, you will play a vital role in optimizing system performance in cloud-based environments through strategic planning and testing. Your responsibilities will include designing and implementing capacity planning strategies in AWS, with a focus on EKS, EC2, DynamoDB, Lambda, and other services. You will conduct continuous load testing to identify performance bottlenecks, analyze system metrics, and forecast future capacity needs. Collaboration with cross-functional teams to ensure business-aligned application performance is essential, along with developing automated testing frameworks to simulate real-world traffic. Key skills required for this role include strong hands-on experience in the AWS ecosystem, familiarity with Dynatrace or similar monitoring tools, proficiency in scripting (Python, Bash) for automation, knowledge of load testing tools such as JMeter or Gatling, excellent analytical and communication skills, and a solid understanding of container orchestration and microservices. The ideal candidate should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with a minimum of 8 years of relevant experience. If you are passionate about cloud performance optimization and enjoy solving infrastructure challenges, we encourage you to apply by sending your CV to Awanish@optimizze.in. For further inquiries, please contact us at +91-8318739782.
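As a rough illustration of the metric-analysis and forecasting work this posting describes, the sketch below pulls two weeks of EC2 CPUUtilization from CloudWatch and fits a simple linear trend to estimate when a capacity threshold would be crossed. It assumes boto3 credentials are already configured; the region, instance ID, and 70% threshold are hypothetical placeholders, not values from the posting.

```python
"""Hedged sketch: pull recent CPU utilisation from CloudWatch and fit a linear
trend to estimate when average load would cross a capacity threshold.
Assumes boto3 credentials are configured; the instance ID, region and 70%
threshold are placeholders."""
from datetime import datetime, timedelta, timezone

import boto3
import numpy as np

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # placeholder region

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=3600,              # hourly datapoints
    Statistics=["Average"],
)

points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
if len(points) < 2:
    raise SystemExit("Not enough datapoints to fit a trend")

hours = np.arange(len(points))
cpu = np.array([p["Average"] for p in points])

# First-pass linear forecast; a real capacity model would account for seasonality.
slope, intercept = np.polyfit(hours, cpu, 1)
THRESHOLD = 70.0  # % average CPU treated as "needs more capacity" (assumption)
if slope > 0 and cpu[-1] < THRESHOLD:
    hours_left = (THRESHOLD - cpu[-1]) / slope
    print(f"Projected ~{hours_left / 24:.1f} days until {THRESHOLD}% average CPU")
else:
    print("No upward trend toward the threshold in the sampled window")
```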
Posted 20 hours ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Cloud Platform Engineer, you will play a crucial role in developing and maintaining Terraform modules and patterns for AWS and Azure. Your responsibilities will include creating platform landing zones, application landing zones, and deploying application infrastructure. Managing the lifecycle of these patterns will be a key aspect of your role, encompassing tasks such as releases, bug fixes, feature integrations, and updating test cases. You will be responsible for developing and releasing Terraform modules, landing zones, and patterns for both AWS and Azure platforms. Providing ongoing support for these patterns, including bug fixing and maintenance, will be essential. Additionally, you will need to integrate new features into existing patterns to enhance their functionality and ensure that updated and new patterns meet the current requirements. Updating and maintaining test cases for patterns will also be part of your responsibilities to guarantee reliability and performance. To qualify for this role, you should have at least 5 years of experience in AWS and Azure cloud migration. Proficiency in Cloud compute (such as EC2, EKS, Azure VM, AKS) and Storage (like s3, EBS, EFS, Azure Blob, Azure Managed Disks, Azure Files) is required. A strong knowledge of AWS and Azure cloud services, along with expertise in Terraform, is essential. Possessing AWS or Azure certification would be advantageous for this position. Key Qualifications: - 5+ years of AWS/Azure cloud migration experience - Proficiency in Cloud compute and Storage - Strong knowledge of AWS and Azure cloud services - Expertise in Terraform - AWS/Azure certification preferred Mandatory Skills: Cloud AWS DevOps (Minimum 5 Years of Migration Experience) Relevant Experience: 5-8 Years This is a Full-time, Permanent, or Contractual / Temporary job with a contract length of 12 months. Benefits: - Health insurance - Provident Fund Schedule: - Day shift, Monday to Friday, Morning shift Additional Information: - Performance bonus - Yearly bonus,
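For the pattern-lifecycle work described above (releases, bug fixes, and test maintenance for Terraform modules), a minimal pre-release validation step might look like the sketch below. It simply shells out to the Terraform CLI; the module path is a hypothetical placeholder, and a real pipeline would add plan/apply stages and automated tests.

```python
"""Hedged sketch: minimal pre-release validation for a Terraform module/pattern.
Shells out to the Terraform CLI; the module path is a placeholder and a real
release pipeline would add plan/apply stages, tests and versioning."""
import subprocess
import sys
from pathlib import Path

MODULE_DIR = Path("modules/aws-landing-zone")  # hypothetical module path


def run(*args: str) -> None:
    """Run a terraform subcommand inside the module directory, failing fast."""
    result = subprocess.run(["terraform", *args], cwd=MODULE_DIR)
    if result.returncode != 0:
        sys.exit(f"terraform {' '.join(args)} failed")


run("fmt", "-check", "-recursive")  # formatting gate
run("init", "-backend=false")       # initialise providers without remote state
run("validate")                     # static validation of the module
print("Module passed basic release checks")
```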
Posted 21 hours ago
5.0 - 9.0 years
0 Lacs
Kolkata, West Bengal
On-site
You are a passionate and customer-focused AWS Solutions Architect seeking to join Workmates, the fastest-growing partner to the world's major cloud provider, AWS. In this role, you will play a crucial part in driving innovation, creating differentiated solutions, and shaping new customer experiences. Collaborating with industry specialists and technology experts, you will help customers maximize the benefits of AWS in their cloud journey. By choosing Workmates and the AWS Practice, you will elevate your AWS expertise to new heights in an innovative and collaborative setting. Embrace the opportunity to lead the way in native cloud transformation with the leading partner in AWS growth worldwide. At Workmates, we value our people as our greatest assets and are committed to fostering a culture of excellence in cloud-native operations. Join us in our mission to drive innovation across Cloud Management, Media, DevOps, Automation, IoT, Security, and more. Be part of a team where independence and ownership are encouraged, allowing you to thrive authentically. Role Description: - Build and manage cloud infrastructure environments - Ensure availability, performance, security, and scalability of production systems - Collaborate with application teams to implement DevOps practices throughout the development lifecycle - Ability to develop solution prototypes and conduct proof of concepts for new tools - Design automated, repeatable, and scalable processes to enhance efficiency and software quality, including managing Infrastructure as Code and developing internal tooling to simplify workflows - Automate and streamline operations and processes - Troubleshoot and diagnose issues/outages, providing operational support - Engage in incident handling, promoting a culture of post-mortem analysis and knowledge sharing Requirements: - Minimum of 5 years of hands-on experience in building and supporting large-scale environments - Strong background in Architecting and Implementing AWS Cloud solutions - Proficiency in AWS CloudFormation and Terraform - Experience with Docker Containers, container environment build and deployment - Proficient in Kubernetes and EKS - Sysadmin and infrastructure expertise (Linux internals, filesystems, networking) - Skilled in scripting, particularly Bash scripting - Experience with code check-in, peer review, and collaboration within distributed teams - Hands-on experience in CI/CD pipeline setup and release - Strong familiarity with CI/CD tools such as Jenkins, GitLab, or TravisCI - Proficient in AWS Developer tools like AWS Code Pipeline, Code Build, Code Deploy, AWS Lambda, AWS Step Function, etc. - Experience with log management solutions (ELK/EFK or similar) - Proficiency in Configuration Management tools like Ansible or similar - Expertise in modern Monitoring and Alerting tools (CloudWatch, Prometheus, Grafana, Opsgenie, etc.) - Passion for automating tasks and troubleshooting production issues - Experience in automation testing, script generation, and integration with CI/CD - Skilled in AWS Security (IAM, Security Groups, KMS, etc.) - Must have CKA/CKAD/CKS Certifications and knowledge of Python/Go/Bash Good to have: - AWS Professional Certifications - Experience with Service Mesh and Distributed tracing - Knowledge of Scrum/Agile methodology Choose Workmates to advance your career and be part of a team dedicated to delivering innovative solutions in a dynamic and supportive environment. 
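As one small example of the kind of operational automation this role involves, the hedged sketch below uses boto3 to report the health of an EKS cluster and its managed node groups. The cluster name and region are hypothetical placeholders.

```python
"""Hedged sketch: report the health of an EKS cluster and its managed node
groups with boto3. Cluster name and region are placeholders."""
import boto3

eks = boto3.client("eks", region_name="ap-south-1")  # placeholder region
CLUSTER = "platform-prod"                             # hypothetical cluster name

cluster = eks.describe_cluster(name=CLUSTER)["cluster"]
print(f"{CLUSTER}: status={cluster['status']}, version={cluster['version']}")

for ng_name in eks.list_nodegroups(clusterName=CLUSTER)["nodegroups"]:
    ng = eks.describe_nodegroup(clusterName=CLUSTER, nodegroupName=ng_name)["nodegroup"]
    scaling = ng["scalingConfig"]
    print(
        f"  nodegroup {ng_name}: {ng['status']}, "
        f"desired={scaling['desiredSize']} (min={scaling['minSize']}, max={scaling['maxSize']})"
    )
```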
Join us in shaping the future of cloud technology and making a meaningful impact on the industry.
Posted 21 hours ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Cloud DevOps Engineer at Amdocs, your role will involve the design, development, modification, debugging, and maintenance of software systems. You will be responsible for tasks such as monitoring, triaging, root cause analysis, and reporting of production incidents. Additionally, you will investigate issues reported by clients, manage Reportdb servers, and handle access management for new users on Reportdb. Collaborating with the stability team to enhance Watch tower alerts, working with cronjobs and scripts for dumping and restoring from ProdDb, and performing non-prod deployments in Azure DevOps will also be part of your responsibilities. Creating Kibana dashboards will be another key aspect of your job. To excel in this role, you must possess technical skills such as experience in AWS DevOps, EKS, and EMR, strong knowledge of Docker and Dockerhub, proficiency in Terraform and Ansible, and good exposure to Git and Bitbucket. Knowledge and experience in Kubernetes and Docker, along with cloud experience working with VMs and Azure storage, will be beneficial. Sound data engineering experience is also preferred. In addition to technical skills, you are expected to have strong problem-solving abilities, effective communication with clients and operational managers, and the capacity to build and maintain good relationships with colleagues. Being adaptable and able to prioritize tasks, work under pressure, and meet deadlines is essential. Anticipating problems, demonstrating an innovative approach, and possessing good presentation skills are qualities that will contribute to your success in this role. A willingness to work in shifts and extended hours is required. In this position, you will have the opportunity to design and develop new software applications and work in a dynamic environment that offers personal growth opportunities. If you are looking for a challenging role where you can contribute to innovative projects and be part of a growing organization, this job is perfect for you.
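The dump-and-restore cron work mentioned above could, for illustration, look something like the hedged sketch below: a nightly pg_dump archived to S3. The host, database, and bucket names are hypothetical placeholders, and credentials are assumed to come from the environment.

```python
"""Hedged sketch: nightly dump of a reporting database archived to S3.
Assumes pg_dump is installed and credentials come from the environment
(e.g. PGPASSWORD); host, database and bucket names are placeholders."""
import subprocess
import sys
from datetime import datetime

import boto3

DB_HOST = "reportdb.internal.example.com"  # placeholder host
DB_NAME = "reportdb"                        # placeholder database
BUCKET = "prod-db-archives"                 # placeholder bucket

stamp = datetime.now().strftime("%Y%m%d-%H%M")
dump_file = f"/tmp/{DB_NAME}-{stamp}.dump"

# Custom-format dump so individual objects can later be restored with pg_restore.
result = subprocess.run(["pg_dump", "-h", DB_HOST, "-d", DB_NAME, "-Fc", "-f", dump_file])
if result.returncode != 0:
    sys.exit("pg_dump failed")

boto3.client("s3").upload_file(dump_file, BUCKET, f"{DB_NAME}/{stamp}.dump")
print(f"Uploaded {dump_file} to s3://{BUCKET}/{DB_NAME}/{stamp}.dump")
```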
Posted 22 hours ago
7.0 - 15.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You are looking for a Product Engineering Leader with experience in building scalable B2B/B2E products. The ideal candidate should possess the following experience and skills: - Experience in developing data-driven and workflow-based products across multiple clouds (AWS/Azure/Google). - Proven track record of leading engineering teams to develop Enterprise-grade products that can scale on demand and prioritize security. - Ability to conceptualize products, architect/design, and swiftly deliver them to customers. - Passion for building highly scalable and performant products. - Apply creative thinking to resolve technical solutions that align with business goals and Product NFR goals. - Present the vision and value of proposed architectures and solutions to various audiences in alignment with business priorities. - Conduct technology-centric discussions with customers, understand their requirements, manage expectations, and provide tailored solutions. - Collaborate effectively with key partners, including Product Managers & Owners, business, and engineering teams to define solutions for complex requirements. - Any experience in Life Sciences/Commercial/Incentive Compensation domains is a plus. - Proficiency in building data-centric products dealing with large datasets. In terms of behavioral skills, the ideal candidate should possess: - Product Mindset with experience in agile methodology-based product development. - Task Management skills with experience in team management and prioritization. - Strong communication skills to convey ideas and information clearly and accurately in both written and verbal forms. The desired education and experience include: - 15+ years of IT experience. - 7+ years of experience in product development and core engineering. - Bachelor's/master's degree in Computer Science from Tier 1-2 college. Technology exposures should include: - React JS, Python, PySpark, Snowflake, Postgres, AWS/Azure/Google Cloud, Docker, EKS. - Exposure to AI/GenAI technologies is a plus. - Engineering experience in Java/JEE/.Net Technologies is also acceptable if accompanied by a strong product engineering background.,
Posted 22 hours ago
6.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Description Job Title: AWS Cloud with Python Position: Lead Analyst/SSE Experience: 6-9 Years Category: Software Development/Engineering Main location: Bangalore Employment Type: Full Time Job Description: API Development: Knowledgeable in API development, lifecycle management, and gateways like Envoy. Strong understanding of API testing tools. Cloud Expertise: Proficient in AWS and its various services such as EKS, S3, DynamoDB, EC2, Route 53, Lambda, etc. Ability to automate with various scripting languages (Python, Shell scripting, Go). Understanding of infrastructure-as-code tools (IAM, ARM, Terraform, Chef). Solid understanding of cloud computing and DevOps concepts, including CI/CD pipelines. Hands-on Kubernetes skills and knowledge, including an understanding of Kubernetes cluster rehydration. Hands-on experience with one or more observability tools (Prometheus, Grafana, ELK/OpenSearch, OpenTelemetry, Datadog, etc.). Experienced in instrumentation, with systems skills in building, operating, monitoring, logging, and alerting for distributed systems at scale. Proven experience implementing advanced observability practices and techniques at scale. Proven experience maintaining scalability and resiliency of complex environments. Ability to triage, execute root cause analysis, and be decisive under pressure. Experience managing and interpreting large datasets using query languages and visualization tools. Proficient communication skills with an ability to reach both technical and non-technical audiences. Ability to learn new software, methods, and practices and bring them to our developers. Ability to work with a variety of individuals and groups, both in person and virtually, in a constructive and collaborative manner, and to build and maintain effective relationships. Proven experience performing chaos testing to build confidence in the system's capability to withstand turbulent conditions in production. On-call support experience. Understanding of Agile methodology. Behavioral: Analytical skills and research capabilities. Ability to evaluate and propose best-of-breed tools and engineering best practices. Deeply self-motivated, with the ability to work independently, coordinating activities within cross-regional and multi-functional teams. A passion for excellence, innovation, and teamwork; eager to learn and adapt every day. Proven track record of quickly learning, adapting, and thriving in a fast-paced, dynamic, and deadline-driven environment. Excellent communication skills. Note: This job description is a general outline of the responsibilities and qualifications typically associated with the Virtualization Specialist role.
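For the instrumentation and observability skills listed above, a minimal Prometheus-style sketch using the prometheus_client library is shown below; the metric names, labels, and port are hypothetical placeholders rather than anything specified in the posting.

```python
"""Hedged sketch: instrumenting a small service with prometheus_client.
Metric names, labels and the port are placeholders."""
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("api_requests_total", "Total API requests", ["endpoint", "status"])
LATENCY = Histogram("api_request_seconds", "Request latency in seconds", ["endpoint"])


def handle_request(endpoint: str) -> None:
    """Simulated handler that records one latency observation and one count."""
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(endpoint=endpoint, status="200").inc()


if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request("/v1/items")
```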
Actual duties and qualifications may vary based on the specific needs of the organization. CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation. Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs. Your future duties and responsibilities Required Qualifications To Be Successful In This Role Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
Posted 22 hours ago
15.0 - 17.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Looking for a Product Engineering Leader who has experience in building scalable B2B/B2E products with the following experience/skills: Should have experience in developing data-driven and workflow-based products spanning multiple clouds (AWS/Azure/Google). Should have led engineering teams to develop Enterprise-grade products which can scale on demand and are secure. Ability to conceptualize products, architect/design them, and take them to customers in a short span of time. Should have a passion for building highly scalable and performant products. Apply creative thinking and approach to resolve technical solutions that further business goals and align with Product NFR goals like performance, reliability, scalability, usability, security, flexibility, and cost. Conceptualize and present the vision and value of proposed architectures and solutions to a wide range of audiences in alignment with business priorities and objectives. An ability to have technology-centric discussions with customers, understand their requirements, manage expectations, and provide solutions for their business needs. Collaborate efficiently with key partners including Product Managers & Owners, business, and engineering teams to identify and define solutions for complex business and technical requirements. Any experience in Life Sciences/Commercial/Incentive Compensation domains is an added plus. Experience in building data-centric products (dealing with large data sets). Behavioral Skills: Product Mindset - Experience in agile methodology-based product development. Able to define incremental development paths for functionality to achieve the future vision. Task Management - Team management experience; able to plan tasks and discuss and work on priorities. Communication - Able to convey ideas and information clearly and accurately, whether in writing or verbally. Education & Experience: 15+ years of experience working in IT. 7+ years of experience in product development and core engineering. Bachelor's/Master's degree in Computer Science from a Tier 1-2 college. Technology Exposures: React JS, Python, PySpark, Snowflake, Postgres, AWS/Azure/Google Cloud, Docker, EKS. Exposure to AI/GenAI technologies is an added plus. Apart from the above, strong engineering experience in Java/JEE/.Net technologies will also work, as long as the candidate has a solid product engineering background.
Posted 23 hours ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Overview FTSE Russell, part of the London Stock Exchange Group, is an essential index partner for a changing world, providing category-defining indices across asset classes and investment objectives to create new possibilities for the global investment community. FTSE Russell's expertise and products are used extensively by institutional and retail investors globally. Job Summary Are you passionate about making developers' lives easier, faster, and more fulfilling? We're on a mission to supercharge our engineering culture by building a world-class DevOps organization that's the envy of the industry. This is a great opportunity to join a growing company in an innovative and dynamic industry! We are looking for a strong, hands-on DevOps Engineer to build and maintain a brand-new cloud-based Index platform. You will use your deep experience of DevOps CI/CD practices to design, implement and optimize a best-in-class AWS microservices architecture. You will build and manage secure, reliable, and cost-effective cloud environments, drive observability and resilience through chaos and disaster recovery testing, and implement FinOps strategies to optimize cloud spending. Your expertise will be crucial in ensuring operational excellence and business continuity in a dynamic cloud-native environment. An ideal candidate for this role would be someone with a strong background in DevOps practices and tools, with a focus on GitLab, Terraform, AWS cloud, Kubernetes/EKS, and observability. You will play a crucial role in our technology team, contributing to the development, deployment, and maintenance of our infrastructure. Key Responsibilities Design, implement, and manage scalable, secure, and highly available AWS cloud infrastructure using Terraform as IaC. Build, maintain, and enhance CI/CD pipelines to automate software delivery processes for Java and Python based microservices-style architectures. Collaborate with software developers across multiple geographies and cross-functional teams to understand project objectives, gather requirements, and deliver systems and software within agreed-upon timelines. Manage and optimize container orchestration environments with Amazon EKS. Implement cloud security practices, including IAM management, encryption, vulnerability assessments, and compliance monitoring. Develop and maintain observability frameworks using Datadog, Prometheus, or equivalent monitoring and logging tools. Lead chaos engineering practices to proactively identify and mitigate potential system failures. Plan and implement disaster recovery testing to ensure business continuity and rapid failover. Implement Cloud FinOps practices to monitor and optimize cloud spend. Solve issues and lead root cause analysis with clear documentation and resolution plans. Monitor and improve system health and performance to minimize downtime and increase end-user satisfaction. Mentor junior engineers and promote a culture of continuous improvement and cloud operational excellence. Stay updated on industry standard methodologies and emerging technologies in DevOps and Cloud. Must Have Skills Strong hands-on experience with AWS cloud services: EC2, S3, RDS, Lambda, VPC, IAM, CloudWatch, EKS, ECS. Expertise in provisioning of infrastructure on AWS using Terraform. Proven experience designing and managing CI/CD pipelines with tools such as GitLab or Jenkins (preferably GitLab) for Java and Python based applications. Strong scripting skills in Python, Bash, Ansible or similar language.
(Bash and Python preferred.) Solid knowledge of container orchestration platforms like Docker/Kubernetes in general and specifically Amazon EKS and ECS. Knowledge of cloud security principles including IAM policies, encryption, vulnerability scanning, and compliance audits. Proficient in networking aspects of AWS, focused on building a robust, secure and efficient cloud network architecture. Proficient in observability tools like Datadog, Prometheus, Grafana, or equivalents. Experience in planning, execution, and automation of Disaster Recovery (DR) testing and failover procedures. Understanding and application of Cloud FinOps principles for cost management and optimization through monitoring, budgeting, and employing strategic FinOps practices. Experience with automated testing frameworks and infrastructure testing tools. Excellent communication and problem-solving skills, with the ability to work independently and collaboratively. Familiarity with Agile principles. Nice To Have Skills AWS certifications such as AWS Certified Solutions Architect, Certified DevOps Engineer or equivalent. Practical experience with chaos engineering methodologies and tools. Experience in the Financial Services domain. Bonus Skills Experience working in areas of Equity or Fixed Income and a working knowledge of Benchmarks and Indices. Preferred Qualifications Bachelor's degree in Computer Science, Engineering, or related field, or equivalent experience. 5+ years of hands-on DevOps experience with strong expertise in AWS. LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone's race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs.
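As an illustration of the Cloud FinOps practices mentioned in the responsibilities, the hedged sketch below queries the AWS Cost Explorer API for a month's unblended cost per service. It assumes the caller has the ce:GetCostAndUsage permission; the date range is an arbitrary example.

```python
"""Hedged sketch: per-service spend report via the AWS Cost Explorer API.
Assumes ce:GetCostAndUsage permission; the date range is an arbitrary example."""
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print non-zero spend per service for the period, largest first.
groups = resp["ResultsByTime"][0]["Groups"]
rows = [(g["Keys"][0], float(g["Metrics"]["UnblendedCost"]["Amount"])) for g in groups]
for service, amount in sorted(rows, key=lambda r: r[1], reverse=True):
    if amount > 0:
        print(f"{service:50s} {amount:12.2f} USD")
```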
Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it's used for, and how it's obtained, your rights and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
Posted 23 hours ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Engineer (AWS DevOps) Who We Are: Headquartered in New York City, Take-Two Interactive Software, Inc. is a leading developer, publisher, and marketer of interactive entertainment for consumers around the globe. The Company develops and publishes products principally through Rockstar Games, 2K, Private Division, and Zynga. Our products are currently designed for console gaming systems, PC, and Mobile, including smartphones and tablets, and are delivered through physical retail, digital download, online platforms, and cloud streaming services. The Company's common stock is publicly traded on NASDAQ under the symbol TTWO. While our offices (physical and virtual) are casual and inviting, we are deeply committed to our core tenets of creativity, innovation and efficiency, and individual and team development opportunities. Our industry and business are continually evolving and fast-paced, providing numerous opportunities to learn and hone your skills. We work hard, but we also like to have fun, and believe that we provide a great place to come to work each day to pursue your passions. The Challenge Take-Two, a leader in the interactive entertainment industry, is looking for a seasoned DevOps Engineer to join a team building a cloud-based data and analytics platform and an integration data hub to address the challenges of point-to-point integrations. The ideal candidate has technical depth and hands-on implementation experience with various practices and tools in the DevOps toolchain, and must have previous experience in a Cloud DevOps role. We are looking for an individual who is comfortable rolling up their sleeves to design and code modules for infrastructure, applications, and processes. What You'll Take On Build cloud infrastructure components by leveraging Infrastructure as Code. Collaborate with product/project development teams to drive automation of Configuration Management, Build, Release, Deployment and Monitoring processes. Provide professional support for the developed automations, respond to incidents to proactively prevent system outages, and ensure environment availability to meet SLAs. Contribute to innovation through automation to enable standard deployable units of infrastructure through multiple environments until production. Participate in architectural discussions to ensure solutions are designed for successful deployment, security, cost effectiveness and high availability in the cloud. Ensure all infrastructure components meet proper performance and security standards. What You Bring 3+ years of hands-on experience building DevOps solutions in a cloud environment, preferably AWS. 2+ years of experience building CI/CD pipeline automation using tools (Jenkins preferred) and scripting languages (Python preferred). Hands-on experience with AWS services like VPC, EC2, S3, ELB, RDS, ECS/EKS, IAM, CloudFront, CloudWatch, SQS/SNS, Lambda is a must. Experience working with Infrastructure as Code tooling is a must. Experience with Terraform is highly desirable. Strong Docker or Kubernetes skills highly desirable. Knowledge of IP networking, VPNs, DNS, load balancing and firewalls is a must. Experience working on Linux-based infrastructure and shell scripting is a must. Experience with scheduling tools like Airflow is highly desirable. Excellent written and verbal communication skills. What We Offer You: Great Company Culture.
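For the scheduling and automation skills mentioned above (Airflow is called out as desirable), a minimal DAG sketch is shown below; the DAG id, schedule, and task callables are hypothetical placeholders.

```python
"""Hedged sketch: a small daily Airflow DAG of the kind the posting mentions.
DAG id, schedule and callables are placeholders."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**_):
    print("extracting raw events")  # placeholder for the real extract step


def load_to_warehouse(**_):
    print("loading curated data")   # placeholder for the real load step


with DAG(
    dag_id="daily_events_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load
```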
We pride ourselves on being one of the most creative and innovative places to work; creativity, innovation, efficiency, diversity and philanthropy are among the core tenets of our organization and are integral drivers of our continued success. Growth: As a global entertainment company, we pride ourselves on creating environments where employees are encouraged to be themselves, inquisitive, collaborative and to grow within and around the company. Work Hard, Enjoy Life. Our employees bond, blow off steam, and flex some creative muscles through our office gaming spaces, company parties, game release events, monthly socials, and team challenges. Benefits. Benefits include, but are not limited to: Discretionary bonus, Provident fund contributions, 1+5 medical insurance + top-up options and access to the Practo online doctor consultation app, Employee assistance program, 3X CTC Life Assurance, 3X CTC Personal accident insurance, childcare services, 20 days holiday + statutory holidays. Perks. Gym reimbursement up to INR 1150 per month, charitable giving program, access to learning platforms, gaming events. Please be aware that Take-Two does not conduct job interviews or make job offers over third-party messaging apps such as Telegram, WhatsApp, or others. Take-Two also does not engage in any financial exchanges during the recruitment or onboarding process, and the Company will never ask a candidate for their personal or financial information over an app or other unofficial chat channel. Any attempt to do so may be the result of a scam or phishing exercise. Take-Two's in-house recruitment team will only contact individuals through their official Company email addresses (i.e., via a take2games.com email domain). If you need to report an issue or otherwise have questions, please contact [HIDDEN TEXT].* As an equal opportunity employer, Take-Two Interactive Software, Inc. (Take-Two) is committed to fostering and celebrating the diverse thoughts, cultures, and backgrounds of its talent, partners, and communities throughout its organization. Consistent with this commitment, Take-Two does not discriminate or retaliate against any employee or job applicant because of their race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), national origin, age, disability, and genetic information (including family medical history), or on the basis of any other trait protected by applicable law. If you need to report a concern or have questions regarding Take-Two's equal opportunity commitment, please contact [HIDDEN TEXT].
Posted 23 hours ago
7.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment. Key Responsibilities Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs. Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions. Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB. Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA). Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config. Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements. Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI. Requirements 7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles. Strong hands-on expertise with the AWS ecosystem and tools listed above. Proficiency in scripting (e.g., Python, Bash) and infrastructure automation. Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate. Familiarity with data engineering and GenAI workflows is a plus. AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
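To illustrate the analytics-platform provisioning and automation described above, the hedged sketch below runs an Athena query from Python and polls for completion; the database, query, and results bucket are hypothetical placeholders.

```python
"""Hedged sketch: run an Athena query and print the results with boto3.
Database, query and output bucket are placeholders."""
import time

import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT event_type, count(*) AS n FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics"},                          # placeholder
    ResultConfiguration={"OutputLocation": "s3://analytics-query-results/"},  # placeholder
)["QueryExecutionId"]

# Poll until the query finishes; a production job would add timeouts and backoff.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
else:
    print(f"Query ended in state {state}")
```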
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an AWS Operations and DevOps Support specialist, you will be responsible for providing 24/7 monitoring, incident management with SLAs, cost optimization, and governance services. Your primary focus will be on adhering to security and compliance best practices, as well as automating provisioning and workflows. Your key deliverables will include generating monthly reports on usage, cost, and incidents, maintaining cloud architecture documentation, conducting security posture and compliance assessments, and ensuring the smooth operation of DevOps pipelines. To excel in this role, you should ideally have experience working with container platforms such as ECS and EKS, experience handling ML workloads, relevant certifications, and familiarity with HPC and ML technologies like parallel clusters, AWS Batch, and SageMaker, especially in the context of life sciences data workloads. While AWS Advanced or Premier Partner status is preferred, the right candidate will also have a strong foundation in the aforementioned areas, along with a track record of references to support their expertise. This is a full-time, permanent position with benefits such as health insurance and provident fund. The role entails working day and evening shifts from Monday to Friday.
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Solution Architect in the Pre-Sales department, with 4-6 years of experience in cloud infrastructure deployment, migration, and managed services, your primary responsibility will be to design AWS Cloud Professional Services and AWS Cloud Managed Services solutions tailored to meet customer needs and requirements. You will engage with customers to analyze their requirements, ensuring cost-effective and technically sound solutions are provided. Your role will also involve developing technical and commercial proposals in response to various client inquiries such as Requests for Information (RFI), Requests for Quotation (RFQ), and Requests for Proposal (RFP). Additionally, you will prepare and deliver technical presentations to clients, highlighting the value and capabilities of AWS solutions. Collaborating closely with the sales team, you will work towards supporting their objectives and closing deals that align with business needs. Your ability to apply creative and analytical problem-solving skills to address complex challenges using AWS technology will be crucial. Furthermore, you should possess hands-on experience in planning, designing, and implementing AWS IaaS, PaaS, and SaaS services. Experience in executing end-to-end cloud migrations to AWS, including discovery, assessment, and implementation, is required. You must also be proficient in designing and deploying well-architected landing zones and disaster recovery environments on AWS. Excellent communication skills, both written and verbal, are essential for effectively articulating solutions to technical and non-technical stakeholders. Your organizational, time management, problem-solving, and analytical skills will play a vital role in driving consistent business performance and exceeding targets. Desirable skills include intermediate-level experience with AWS services like AppStream, Elastic BeanStalk, ECS, Elasticache, and more, as well as IT orchestration and automation tools such as Ansible, Puppet, and Chef. Knowledge of Terraform, Azure DevOps, and AWS development services will be advantageous. In this role based in Noida, Uttar Pradesh, India, you will have the opportunity to collaborate with technical and non-technical teams across the organization, ensuring scalable, efficient, and secure solutions are delivered on the AWS platform.,
Posted 1 day ago
12.0 - 16.0 years
0 Lacs
Karnataka
On-site
Join the Agentforce team in AI Cloud at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organizations. We work in a highly collaborative environment, and you will partner with a highly cross-functional team comprised of Data Scientists, Software Engineers, Machine Learning Engineers, UX experts, and Product Managers to build upon Agentforce, our cutting-edge new AI framework. We value execution, clear communication, feedback and making learning fun. Your impact - You will: Architect, design, implement, test, and deliver highly scalable AI solutions: Agents, AI Copilots/assistants, Chatbots, AI Planners, RAG solutions. Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.). Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company. Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps. Analyze and provide feedback on product strategy and technical feasibility. Drive long-term design strategies that span multiple sophisticated projects, and deliver technical reports and performance presentations to customers and at industry events. Actively communicate with, encourage and motivate all levels of staff. Be a subject matter expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements. Troubleshoot complex production issues and interface with support and customers as needed. Required Skills: 12+ years of experience in building highly scalable Software-as-a-Service applications/platforms. Experience building technical architectures that address complex performance issues. Thrive in dynamic environments, working on cutting-edge projects that often come with ambiguity. Innovation/startup mindset to be able to adapt. Deep knowledge of object-oriented programming and experience with at least one object-oriented programming language, preferably Java. Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development. High proficiency in at least one high-level programming language and web framework (NodeJS, Express, Hapi, etc.). Proven understanding of web technologies, such as JavaScript, CSS, HTML5, XML, JSON, and/or Ajax. Data model design, database technologies (RDBMS & NoSQL), and languages such as SQL and PL/SQL. Experience delivering or partnering with teams that ship AI products at high scale.
Experience in automated testing, including unit and functional testing using Java, JUnit, JSUnit, Selenium. Demonstrated ability to drive long-term design strategies that span multiple complex projects. Experience delivering technical reports and presentations to customers and at industry events. Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues. Experience with the full software lifecycle in highly agile and ambiguous environments. Excellent interpersonal and communication skills. Preferred Skills: Solid experience in API development, API lifecycle management, and/or client SDK development. Experience with machine learning or cloud technology platforms like AWS SageMaker, Terraform, Spinnaker, EKS, GKE. Experience with AI/ML and data science, including Predictive and Generative AI. Experience with data engineering, data pipelines or distributed systems. Experience with continuous integration (CI) and continuous deployment (CD), and service ownership. Familiarity with Salesforce APIs and technologies. Ability to support/resolve production customer escalations with excellent debugging and problem-solving skills.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Software Engineer (Cloud Development) at our company, you will have the opportunity to be a key part of the Cloud development group in Bangalore. We are seeking passionate individuals who are enthusiastic problem solvers and experienced Cloud engineers to help us build and maintain Synamedia GO product and Infinite suite of solutions. Your role will involve designing, developing, and deploying solutions using your deep-rooted programming and system experience for the next generation of products in the domain of Video streaming. Your key responsibilities will include conducting technology evaluations, developing proof of concepts, designing Cloud distributed microservices features, writing code, conducting code reviews, continuous integration, continuous deployment, and automated testing. You will work as part of a development team responsible for building and managing microservices for the platform. Additionally, you will play a critical role in the design and development of services, overseeing the work of junior team members, collaborating in a multi-site team environment, and ensuring the success of your team by delivering high-quality results in a timely manner. To be successful in this role, you should have a strong technical background with experience in cloud design, development, deployment, and high-scale systems. You should be proficient in loosely coupled design, Microservices development, Message queues, and containerized applications deployment. Hands-on experience with technologies such as NodeJS, Java, GoLang, and Cloud Technologies like AWS, EKS, Open stack is required. You should also have experience in DevOps, CI/CD pipeline, monitoring tools, and database technologies. We are looking for highly motivated individuals who are self-starters, independent, have excellent analytical and logical skills, and possess strong communication abilities. You should have a Test-Driven Development (TDD) mindset, be open to supporting incidents on Production deployments, and be willing to work outside of regular business hours when necessary. At our company, we value diversity, inclusivity, and equal opportunity. We offer flexible working arrangements, skill enhancement and growth opportunities, health and wellbeing programs, and the chance to work collaboratively with a global team. We are committed to fostering a people-friendly environment, where all our colleagues can thrive and succeed. If you are someone who is eager to learn, ask challenging questions, and contribute to the transformation of the future of video, we welcome you to join our team. We offer a culture of belonging, where innovation is encouraged, and we work together to achieve success. If you are interested in this role or have any questions, please reach out to our recruitment team for assistance.,
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a Cloud Engineer at JLLT CoE, you will be responsible for designing and implementing cloud infrastructure across Azure, AWS, and GCP environments. Your role will involve architecting, deploying, and optimizing Azure-based solutions, including compute, storage, networking, and security services. You will lead cloud migration and modernization initiatives with a focus on Azure technologies and maintain infrastructure as code using modern DevOps practices. Additionally, you will manage containerized workloads using AKS and EKS, establish cloud security standards, build CI/CD pipelines, provide mentorship to junior engineers, optimize cloud resource usage, and troubleshoot complex issues in cloud environments. To excel in this role, you should have a minimum of 8 years of experience in cloud engineering/architecture roles, with strong hands-on experience in Azure and working knowledge of AWS and GCP platforms. Proficiency in Kubernetes orchestration, particularly AKS, along with experience in Karpenter, ArgoCD, and Istio in container environments is essential. You should also demonstrate expertise in cloud security principles and cloud networking concepts, and have experience with GitHub Actions for CI/CD workflows. Familiarity with infrastructure-as-code tools like Terraform, Azure ARM templates, and AWS CloudFormation is required, along with excellent communication and collaboration skills. Preferred qualifications include Azure certifications, experience with Azure governance and compliance frameworks, knowledge of hybrid cloud architectures, a background in supporting enterprise-scale applications, and experience with Azure monitoring and observability tools. We are seeking a proactive cloud professional who excels in Azure environments, maintains multi-cloud expertise, can work independently on complex problems, and delivers robust, secure cloud solutions. If you are passionate about cloud technologies and looking to join a dynamic team, apply today to be a part of our exciting journey at JLLT CoE.
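As a small illustration of managing containerized workloads on AKS/EKS, the hedged sketch below uses the official Kubernetes Python client to report node readiness and pending pods; it assumes a kubeconfig for the target cluster is already in place.

```python
"""Hedged sketch: basic cluster health report with the Kubernetes Python client.
Assumes a kubeconfig for the target AKS/EKS cluster is already configured."""
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    print(f"{node.metadata.name}: Ready={ready}")

pending = [
    p.metadata.name
    for p in v1.list_pod_for_all_namespaces().items
    if p.status.phase == "Pending"
]
print(f"Pending pods: {len(pending)}")
```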
Posted 2 days ago
9.0 - 13.0 years
0 Lacs
Karnataka
On-site
The HybridCloud Managed Containers PO will be part of a team responsible for designing, building, and operating global AWS, Azure, and On-premises environments at Siemens Healthineers. You should have experience in Cloud and On-premises IT architecture, software implementation, automation, quality assurance, monitoring, and maintaining services with various dependencies. Collaboration with DevOps and SysOps teams is crucial to deliver highly available and scalable services. Working closely with business teams to understand requirements and translate them into performant cloud solutions is also a key aspect of this role. Prior experience in designing cloud and on-premises n-tier applications or IT infrastructure is required. As a HybridCloud Managed Containers PO, your responsibilities will include owning product end-to-end responsibility, defining product roadmap and capabilities, managing the product life cycle, ensuring product security, governance, and operations, providing product training and documentation, and interfacing with various teams and stakeholders. Desired qualifications for this position include having over 9 years of experience, in-depth knowledge of Kubernetes and Docker, hands-on experience with AKS, EKS, OpenShift Kubernetes distributions, and standalone Docker servers, as well as knowledge of containerizing applications, container image registries, DevOps, and integrating cloud resources with DevOps tools. Additionally, familiarity with non-functional requirements like patching, backup, monitoring, vulnerability management, and cost management of resources is important. The ideal candidate for this role should be highly self-motivated, able to communicate effectively with individuals at all levels, possess strong oral, written, and presentation skills, demonstrate strong business acumen, work well under pressure, excel in dynamic and fast-paced environments, and build solid relationships with team members and stakeholders. You should be technically innovative, have excellent communication and negotiation skills, and the ability to document complex concepts clearly. Encouraging open communication, taking initiative to solve technical problems, driving innovation, and striving for standardization and simplification in work processes are key competencies for this role. Soft skills requirements include leadership qualities, collaboration, customer orientation, intercultural sensitivity, value orientation, team development, multitasking abilities, initiative, efficient communication skills, quick learning capability, and a focus on delivery quality.,
Posted 2 days ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description Bachelor's/Master's degree in Computer Science, Information Technology or a related field. 5-7 years of experience in a DevOps role. Strong understanding of the SDLC and experience working on fully Agile teams. Proven experience in coding and scripting: DevOps, Ant/Maven, Groovy, Terraform, shell scripting, and Helm chart skills. Working experience with IaC tools like Terraform, CloudFormation, or ARM templates. Strong experience with cloud computing platforms (e.g. Oracle Cloud (OCI), AWS, Azure, Google Cloud). Experience with containerization technologies (e.g. Docker, Kubernetes/EKS/AKS). Experience with continuous integration and delivery tools (e.g. Jenkins, GitLab CI/CD). Kubernetes - Experience with managing Kubernetes clusters and using kubectl for managing Helm chart deployments, ingress services, and troubleshooting pods. OS Services - Basic knowledge of managing, configuring, and troubleshooting Linux operating system issues, storage (block and object), and networking (VPCs, proxies, and CDNs). Monitoring and instrumentation - Implement metrics in Prometheus, Grafana, Elastic, log management and related systems, and Slack/PagerDuty/Sentry integrations. Strong know-how of modern distributed version control systems (e.g. Git, GitHub, GitLab etc). Strong troubleshooting and problem-solving skills, and ability to work well under pressure. Excellent communication and collaboration skills, and ability to lead and mentor junior team members. Career Level - IC3 Responsibilities Design, implement, and maintain automated build, deployment, and testing systems. Experience taking application code and third-party products and building fully automated pipelines for Java applications to build, test, and deploy complex systems for delivery in the cloud. Ability to containerize an application, i.e. creating Docker containers and pushing them to an artifact repository for deployment on containerization solutions with OKE (Oracle Container Engine for Kubernetes) using Helm charts. Lead efforts to optimize the build and deployment processes for high-volume, high-availability systems. Monitor production systems to ensure high availability and performance, and proactively identify and resolve issues. Support and troubleshoot cloud deployment and environment issues. Create and maintain CI/CD pipelines using tools such as Jenkins and GitLab CI/CD. Continuously improve the scalability and security of our systems, and lead efforts to implement best practices. Participate in the design and implementation of new features and applications, and provide guidance on best practices for deployment and operations. Work with the security team to ensure compliance with industry and company standards, and implement security measures to protect against threats. Keep up-to-date with emerging trends and technologies in DevOps, and make recommendations for improvement. Lead and mentor junior DevOps engineers and collaborate with cross-functional teams to ensure successful delivery of projects. Analyze, design, develop, troubleshoot and debug software programs for commercial or end-user applications. Write code, complete programming, and perform testing and debugging of applications. As a member of the software engineering division, you will analyze and integrate external customer specifications. Specify, design and implement modest changes to existing software architecture. Build new products and development tools. Build and execute unit tests and unit test plans. Review integration and regression test plans created by QA.
Communicate with QA and porting engineering to discuss major changes to functionality. Work is non-routine and very complex, involving the application of advanced technical/business skills in the area of specialization. Leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to the functional area. 6+ years of software engineering or related experience. Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing [HIDDEN TEXT] or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veteran status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
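For the containerize-and-push responsibility described in this posting, a hedged sketch using the Docker SDK for Python is shown below; the OCIR repository path and tag are hypothetical placeholders, and the subsequent Helm deployment to OKE would typically be driven by the CI/CD pipeline rather than this script.

```python
"""Hedged sketch: build a Docker image and push it to a registry with the
Docker SDK for Python. The OCIR repository path and tag are placeholders, and
`docker login` to the registry is assumed to have been done already."""
import docker

REPOSITORY = "region.ocir.io/tenancy/app/orders-service"  # hypothetical OCIR path
TAG = "1.4.2"                                              # hypothetical version tag

client = docker.from_env()

# Build the image from the local Dockerfile and stream the build output.
image, build_logs = client.images.build(path=".", tag=f"{REPOSITORY}:{TAG}")
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Push to the artifact registry; a Helm upgrade against OKE would follow in CI/CD.
for line in client.images.push(REPOSITORY, tag=TAG, stream=True, decode=True):
    if line.get("status"):
        print(line["status"])
```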
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
We are looking for a highly experienced and motivated Backend Solution Architect proficient in Node.js, with exposure to Python and modern cloud-native architectures. As the Backend Solution Architect, you will be responsible for designing and implementing robust, scalable, and secure backend systems. Your role will involve driving innovation with emerging technologies like AI/ML while leveraging deep expertise in AWS services, particularly EKS, ECS, and container orchestration. Your key responsibilities will include designing end-to-end backend architecture using Node.js (mandatory) and optionally Python. You will work with microservices and serverless frameworks to ensure scalability, maintainability, and security. Additionally, you will be tasked with architecting and managing AWS-based cloud solutions, integrating AI and ML components, designing containerized applications, setting up CI/CD pipelines, and optimizing database performance. As the ideal candidate, you should have at least 8 years of backend development experience, with a minimum of 4 years as a Solution/Technical Architect. Expertise in Node.js with frameworks like Express.js and NestJS is essential, along with strong experience in AWS services, microservices, event-driven architectures, and serverless computing. Proficiency in Docker, Kubernetes, CI/CD pipelines, authentication/authorization mechanisms, and API development is also required. Preferred qualifications include experience with AI/ML workflows, full-stack technologies like React, Next.js, or Angular, and hands-on AI/ML integration using platforms such as SageMaker or TensorFlow. Holding an AWS Solution Architect Certification or equivalent is a strong plus, along with knowledge of CDNs and high-performance, event-driven systems. At TechAhead, a global digital transformation company specializing in AI-first product design thinking, we are committed to driving digital innovation and delivering excellence. Join us to shape the future of digital innovation globally and make a significant impact with cutting-edge AI tools and strategies.,
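One of the preferred qualifications above is hands-on AI/ML integration using platforms such as SageMaker. Purely as an illustration (shown in Python for brevity, although the role itself is Node.js-first, and with a hypothetical endpoint name and payload schema), a backend service might call a deployed model endpoint roughly like this:

```python
# Hypothetical sketch of invoking a deployed SageMaker model endpoint from a
# backend service. The endpoint name, region, and payload are placeholders.
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

def score(features: dict) -> dict:
    response = runtime.invoke_endpoint(
        EndpointName="my-recommendation-endpoint",  # placeholder name
        ContentType="application/json",
        Body=json.dumps(features),
    )
    # The response Body is a streaming object; read and decode the JSON result.
    return json.loads(response["Body"].read())

if __name__ == "__main__":
    print(score({"user_id": 42, "context": "homepage"}))
```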
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
bhubaneswar
On-site
You will be joining Logile's dynamic engineering team as a Mid-Level Java Developer. Your main responsibilities will include designing, developing, and maintaining scalable Java-based microservices using Spring Boot. You will also be developing and implementing RESTful APIs for seamless communication between microservices. Additionally, you will deploy and manage applications using containerization and orchestration technologies such as Docker and EKS to ensure high availability and performance. Collaboration with front-end developers to integrate APIs with Angular or React-based user interfaces is essential as you will be working on full-stack solutions that drive Logile's enterprise applications. Your role will involve participating in the full software development lifecycle, from gathering requirements to deployment and support. You will be expected to write clean, well-documented, and maintainable code following industry best practices. Debugging and resolving production issues to ensure system reliability and performance will also be part of your responsibilities. Implementing and contributing to CI/CD pipelines and DevOps practices for automated build, test, and deployment processes will be crucial. Close collaboration with cross-functional teams including QA, DevOps, and Product Management is required. This is an onsite job located at Logile's Bhubaneswar Office, with the expectation of availability during US working times. The ideal candidate should have at least 5 years of hands-on experience in Java development, proficiency in Spring Boot and RESTful API development, and a solid understanding of microservices architecture. Experience with front-end frameworks like Angular or React, Java 8+, Maven/Gradle, version control systems, SQL, and relational databases is necessary. Familiarity with containerization technologies, CI/CD tools, Agile/Scrum methodologies, and a bachelor's degree in computer science or related discipline are also required. Preferred skills include experience with message brokers, cloud platforms like AWS, Azure, or GCP, test automation tools, and knowledge of security and performance best practices.,
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
Genpact is a global professional services and solutions firm with a workforce of over 125,000 individuals in more than 30 countries. We are characterized by our innate curiosity, entrepreneurial agility, and commitment to creating enduring value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises globally, including the Fortune Global 500. We leverage our profound business and industry knowledge, digital operations services, and expertise in data, technology, and AI to deliver impactful outcomes. We are currently seeking applications for the position of Senior Principal Consultant - QA Engineer! Responsibilities: - Develop comprehensive test plans, test cases, and test scenarios based on functional and non-functional requirements. - Manage the test case life cycle efficiently. - Execute and analyze manual and automated tests to identify defects and ensure the quality of software applications. - Collaborate closely with development teams to align test cases with development goals and timelines. - Work with cross-functional teams to ensure adequate testing coverage and effective communication of test results. Moreover, the ideal candidate should possess the ability to manage repeatable standard processes while also demonstrating proficiency in identifying and resolving ad-hoc issues. Qualifications we seek in you! Minimum Qualifications: - Proficiency in SQL, ETL testing, and writing testing scripts in Python to validate functionality, create automation frameworks, and ensure the performance and reliability of data systems. - In-depth understanding of the data domain, encompassing data processing, storage, and retrieval. - Strong collaboration, communication, and analytical skills. - Experience in reviewing system requirements and tracking quality assurance metrics such as defect densities and open defect counts. - Experience in creating and enhancing the integration of CI/CD pipelines. - Familiarity with Agile/Scrum development processes. - Some exposure to performance and security testing. - Hands-on experience in test execution using AWS services, particularly proficiency in services like MSK, EKS, Redshift, and S3. If you are passionate about quality assurance engineering and possess the required qualifications, we invite you to apply for this exciting opportunity! Job Details: - Job Title: Senior Principal Consultant - Location: India-Gurugram - Schedule: Full-time - Education Level: Bachelor's / Graduation / Equivalent - Job Posting Date: Sep 18, 2024, 4:28:53 AM - Unposting Date: Oct 18, 2024, 1:29:00 PM - Master Skills List: Digital - Job Category: Full Time,
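The QA role above asks for Python test scripts that validate data pipelines. As a hedged sketch of the idea, the pytest checks below compare row counts and null rates between a staging and a target table; the database, table names, and column names are placeholders (sqlite3 stands in for a real warehouse driver such as psycopg2 against Redshift).

```python
# Hypothetical ETL validation checks of the kind the posting describes.
# Connection target, table names, and columns are placeholders.
import sqlite3  # stand-in for the warehouse driver (e.g. Redshift via psycopg2)

import pytest

@pytest.fixture
def conn():
    connection = sqlite3.connect("warehouse.db")  # placeholder database
    yield connection
    connection.close()

def test_row_counts_match(conn):
    # The loaded fact table should contain exactly the rows staged upstream.
    src = conn.execute("SELECT COUNT(*) FROM staging_orders").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0]
    assert src == tgt, f"row count mismatch: staging={src}, fact={tgt}"

def test_no_null_keys(conn):
    # Primary business keys should never be NULL after the load.
    nulls = conn.execute(
        "SELECT COUNT(*) FROM fact_orders WHERE order_id IS NULL"
    ).fetchone()[0]
    assert nulls == 0, f"{nulls} rows with NULL order_id"
```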
Posted 3 days ago
7.0 - 11.0 years
0 Lacs
hyderabad, telangana
On-site
You should have a minimum of 7 years of experience working with MEAN Stack technologies. Your expertise should include a solid understanding of microservice architecture and AWS services such as ECS and EKS. You must also have a strong background in developing REST APIs using Node JS. As a key member of the team, you will be responsible for leading and managing the technical aspects of API framework development using NodeJS. Your ability to adapt quickly and thrive in a dynamic environment with evolving technologies is crucial. In addition to Node JS, you should have experience with Angular, Java Spring, and either SQL or MongoDB databases, as well as Docker containers. Familiarity with Apigee, Swagger, and Splunk is a plus. Proficiency in Node JS API development is a must-have for this role. Ideal candidates will hold a degree in BE/B Tech/MCA/M Tech. Knowledge of Couchbase is considered an added advantage. About GroundHog: GroundHog specializes in Mine Digitization and Automation software platforms. Our team is composed of young and dynamic individuals who collaborate within a fast-paced, flat organizational structure. We take pride in fostering a strong family culture and delivering innovative solutions to our clients. Our creative solutions continually challenge the boundaries of what is achievable in the industry. Established in 2010, GroundHog has a global presence with offices in the USA, India, and Australia, serving customers worldwide. Our areas of expertise include Enterprise Mobility, iOS, Android, Apps, ERP Integration, UI/UX Design, Web, Software Design, Architecture, and industries such as Mining, Oil & Gas, Construction, and Base Metals.,
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
The ideal candidate for this role should have strong skills in AWS EMR, EC2, AWS S3, CloudFormation templates, batch data processing, and AWS CodePipeline. Experience with EKS would be an added advantage. As this is a hands-on role, the candidate is expected to have good administrative knowledge of AWS EMR, EC2, AWS S3, CloudFormation templates, and batch data workloads. Responsibilities include managing and deploying EMR clusters, with a solid understanding of AWS accounts and IAM. The candidate should also have experience in administrative tasks for both persistent and transient EMR clusters. A good understanding of AWS CloudFormation, cluster setup, and AWS networking is essential. Hands-on experience with Infrastructure as Code (IaC) deployment tools like Terraform is highly desirable, and experience in AWS health monitoring and optimization is required. Knowledge of Hadoop and Big Data will be considered an added advantage for this position.,
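To illustrate the transient-cluster administration mentioned above, here is a hedged boto3 sketch that launches an EMR cluster which runs one Spark step and terminates itself; the cluster name, instance types and counts, release label, IAM roles, and S3 script path are all placeholders rather than anything prescribed by the posting.

```python
# Hypothetical launch of a transient EMR cluster: run one Spark step, then
# terminate. All names, sizes, and paths are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="nightly-batch-transient",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Transient behaviour: shut the cluster down once all steps complete.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "spark-etl",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/etl.py"],  # placeholder path
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster started:", response["JobFlowId"])
```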
Posted 3 days ago
7.0 - 11.0 years
0 Lacs
pune, maharashtra
On-site
Working at Tech Holding provides you with an opportunity to be part of a full-service consulting firm dedicated to delivering high-quality solutions and predictable outcomes to clients. Our team, comprising industry veterans with experience in both emerging startups and Fortune 50 firms, has developed a unique approach based on deep expertise, integrity, transparency, and dependability. We are currently seeking a Cloud Architect with at least 9 years of experience to assist in building functional systems that enhance customer experience. Your responsibilities will include: Monitoring & Observability: - Setting up and configuring Datadog and Grafana for comprehensive system metric monitoring and visualization. - Developing alerting systems to proactively identify and resolve potential issues. - Integrating monitoring tools with applications and infrastructure to ensure high observability. CI/CD: - Implementing and managing CI/CD pipelines using GitHub Actions, EKS, and Helm to automate build, test, and deployment processes. - Optimizing build times and deployment frequency to expedite development cycles. - Ensuring adherence to best practices for code quality, security, and compliance. Cloud Infrastructure: - Designing and overseeing the migration of Azure infrastructure to AWS with a focus on leveraging best practices and cloud-native technologies. - Managing and optimizing AWS and Azure environments, including cost management, resource allocation, and security. - Implementing and maintaining infrastructure as code (IaC) using tools like Terraform or AWS CloudFormation. Incident Management: - Implementing and managing incident response processes for efficient detection, response, and resolution of incidents. - Collaborating with development, operations, and security teams to identify root causes and implement preventative measures. - Maintaining incident response documentation and conducting regular drills to enhance readiness. Migration: - Leading the migration of ECS services to EKS while ensuring minimal downtime and data integrity. - Optimizing EKS clusters for performance and scalability. - Implementing best practices for container security and management. CDN Management: - Managing and optimizing the Akamai CDN solution to efficiently deliver content. - Configuring CDN settings for caching, compression, and security. - Monitoring CDN performance and troubleshooting issues. Technology Stack: - Proficiency in Python or Go for scripting and automation. - Experience with Mux Enterprise for reporting, monitoring, and alerting. - Familiarity with relevant technologies and tools such as Kubernetes, Docker, Ansible, and Jenkins. Qualifications: - Bachelor's degree in computer science, engineering, or a related field. - Minimum of 7 years of experience in DevOps or a similar role. - Strong understanding of cloud platforms (AWS and Azure) and their services. - Expertise in Python or Go Lang and monitoring/observability tools (Datadog, Grafana). - Proficiency in CI/CD pipelines and tools (GitHub Actions, EKS, Helm). - Experience with infrastructure as code (IaC) tools (Terraform, AWS CloudFormation). - Knowledge of containerization technologies (Docker, Kubernetes). - Excellent problem-solving, troubleshooting, and communication skills. - Ability to work independently and collaboratively within a team. Employee Benefits include flexible work timings, work from home options as needed, family insurance policy, various leave benefits, and opportunities for learning and development.,
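The monitoring and alerting work described above (Datadog, proactive alerts) often starts with shipping custom metrics that monitors can then be built on. A hedged sketch, assuming the `datadog` Python package; the API/app keys, metric name, and tags are placeholders.

```python
# Hypothetical submission of a custom deployment-duration metric to Datadog,
# so an alerting monitor can be defined on it. Keys and names are placeholders.
import time
from datadog import initialize, api

initialize(api_key="YOUR_API_KEY", app_key="YOUR_APP_KEY")  # placeholder credentials

api.Metric.send(
    metric="cicd.deploy.duration_seconds",   # hypothetical metric name
    points=[(time.time(), 87.0)],            # (timestamp, value) pairs
    tags=["service:checkout", "env:prod"],   # placeholder tags
    type="gauge",
)
```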
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
You should have 5 to 8 years of good working experience in NLP. Your experience should include working with text classification and NER models, both for fine-tuning and for zero-shot applications. Proficiency in Hugging Face and PyTorch is required, and experience with document processing is expected. In Python / AI engineering, you should be proficient in building reusable and scalable data pipelines using Python. Experience with NoSQL databases, data analysis, ML/DL frameworks, web API frameworks, and unit testing frameworks is necessary. Experience with microservices and cloud technologies is essential, including Docker and ECS/EKS. Good experience with at least one cloud service provider is required, with AWS preferred. General requirements for the role include the ability to handle large-scale unstructured data and a willingness to learn new frameworks to stay current with the latest NLP developments. About Virtusa: Virtusa values teamwork, quality of life, and professional and personal development. As part of a global team of 30,000 professionals, you can expect exciting projects, opportunities to work with state-of-the-art technologies, and support for your growth throughout your career with Virtusa. At Virtusa, collaboration and a team-oriented environment are highly valued. We provide a dynamic space for great minds to nurture new ideas and strive for excellence.,
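As a minimal sketch of the zero-shot classification mentioned above, the Hugging Face `transformers` pipeline can label text against arbitrary candidate labels without task-specific fine-tuning; the example text, labels, and model choice below are illustrative only.

```python
# Minimal zero-shot classification sketch using the transformers pipeline.
# A fine-tuned checkpoint would normally replace the illustrative model here.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "The invoice total does not match the purchase order.",
    candidate_labels=["billing dispute", "shipping delay", "product defect"],
)
# Labels are returned sorted by score; print the top prediction.
print(result["labels"][0], result["scores"][0])
```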
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
As a Cloud DevOps Engineer at Amdocs, you will be responsible for the design, development, modification, debugging, and maintenance of software systems. Your key responsibilities will include production support activities such as monitoring, triaging, root cause analysis, and reporting production incidents. You will also investigate issues reported by clients, manage Reportdb servers, provide access management to new users on Reportdb, and work with the stability team to enhance Watchtower alerts. Additionally, you will be involved in working with cronjobs and scripts used to dump and restore from ProdDb, handling non-production deployments in Azure DevOps as per requests, and creating Kibana dashboards. Your technical skills should include experience in AWS DevOps, EKS, EMR, strong proficiency in Docker and Dockerhub, expertise in Terraform and Ansible, and good exposure to Git and Bitbucket. It would be advantageous if you have knowledge and experience in Kubernetes, Docker, and other cloud-related technologies. Cloud experience working with VMs and Azure storage, as well as sound data engineering experience, would be considered a plus. In terms of behavioral skills, you should possess good communication abilities, strong problem-solving skills, and the ability to build relationships with clients, operational managers, and colleagues. Furthermore, you should be able to adapt, prioritize, work under pressure, and meet deadlines. Your innovative approach, presentation skills, and willingness to work in shifts or extended hours will be valuable assets in this role. By joining our team, you will be challenged to design and develop new software applications and have the opportunity to work in a growing organization with vast opportunities for personal growth.,
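For the production-support triage described above, a small helper that surfaces recent error volumes per service from the logging cluster is a common starting point. This is a hedged sketch assuming the `elasticsearch` Python client (8.x); the cluster URL, index pattern, and field names are placeholders, not details from the posting.

```python
# Hypothetical triage helper: count ERROR log entries per service over the
# last 15 minutes. Endpoint, index pattern, and field names are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

resp = es.search(
    index="app-logs-*",                      # placeholder index pattern
    size=0,                                  # aggregations only, no hits
    query={"bool": {"must": [
        {"match": {"level": "ERROR"}},
        {"range": {"@timestamp": {"gte": "now-15m"}}},
    ]}},
    aggs={"by_service": {"terms": {"field": "service.keyword", "size": 10}}},
)

for bucket in resp["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```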
Posted 3 days ago
The job market for EKS (Elastic Kubernetes Service) professionals in India is rapidly growing as more companies are adopting cloud-native technologies. EKS is a managed Kubernetes service provided by Amazon Web Services (AWS), allowing users to easily deploy, manage, and scale containerized applications using Kubernetes.
India's major technology hubs are known for their strong technology sectors and have a high demand for EKS professionals.
The average salary range for EKS professionals in India varies based on experience levels: - Entry-level: ₹4-6 lakhs per annum - Mid-level: ₹8-12 lakhs per annum - Experienced: ₹15-25 lakhs per annum
A typical career path in EKS may include roles such as: - Junior EKS Engineer - EKS Developer - EKS Administrator - EKS Architect - EKS Consultant
Besides EKS expertise, professionals in this field are often expected to have knowledge or experience in: - Kubernetes - Docker - AWS services - DevOps practices - Infrastructure as Code (IaC)
As you explore opportunities in the EKS job market in India, remember to showcase your expertise in EKS, Kubernetes, and related technologies during interviews. Prepare thoroughly, stay updated with industry trends, and apply confidently to secure exciting roles in this fast-growing field. Good luck!