0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Join Simeio and help shape the future of our workforce. We're looking for a driven and resourceful Talent Acquisition Specialist to help us find and attract top-tier talent across the organization. If you're passionate about people, skilled at sourcing, and ready to make an impact, we want to hear from you.

What You'll Do
- Partner with hiring managers to understand open roles and hiring needs. Be inquisitive, ask tough questions, and make sure you understand what we are hiring for before you begin your search.
- Create compelling job postings that reflect the role and our company culture.
- Proactively source candidates using LinkedIn, Naukri, other platforms, and referrals. You will be the first person people speak to as part of a recruitment process at Simeio.
- Screen resumes and conduct initial phone interviews to identify top candidates. Make candidates feel comfortable and sell them the opportunity of joining Simeio.
- Coordinate interviews, provide candidate updates, and support a smooth hiring process.
- Maintain accurate candidate records and ensure timely feedback communication throughout the process.
- Help improve our interview process and ensure we're asking the right questions.
- Support employer branding efforts and recruitment campaigns across social channels.
- Attend job fairs and virtual hiring events to build relationships with potential candidates.
- Track hiring metrics and share insights to help optimize recruitment efforts.
- Build and nurture talent pools for future hiring needs.

What You'll Bring / Required Skills
- A thirst for learning! We want to help you be the best you possibly can be, but you need to want to learn, collaborate, and enjoy what you do.
- Experience in full-cycle recruiting, either in another in-house talent acquisition role or in a recruitment agency.
- A demonstrable track record of proactive recruitment. We don't want you just reacting to incoming applications; you need to be out there hunting for profiles and looking for hard-to-find candidates for Simeio.
- A background in a technology-focused organization or environment. If you have knowledge of Identity & Access Management, let us know!
- A high activity drive: get on the phone, send lots of messages, and challenge yourself to achieve high performance.
- Strong organization, both personally and when handling processes with internal and external stakeholders.
- Comfort with ATS platforms and LinkedIn Recruiter (or similar tools). Internally we use JazzHR; if you've used it, that's a bonus.
- Passion for delivering an excellent candidate experience and building long-term relationships with candidates.

Other Basic Requirements
- Excellent communication skills, both written and verbal English.
- Competence with the Microsoft Office suite of products: Teams, Word, Excel, PowerPoint, etc.
- Ability to assess a candidate's fit (skills, personality, and potential) for specific roles and team environments.
- Ability to collaborate with hiring managers at all levels within the organization and represent the company professionally when dealing with external candidates, vendors, or service providers.
- We don't move slowly, so you need to be comfortable working in a fast-paced, people-focused environment.

About Simeio
Simeio has over 650 talented employees across the globe, with offices in the USA (Atlanta HQ and Texas), India, Canada, Costa Rica, and the UK. Founded in 2007, and now backed by private equity company ZMC, Simeio is recognized as a top IAM provider by industry analysts. Alongside Simeio's identity orchestration tool 'Simeio IO', Simeio also partners with industry-leading IAM software vendors to provide access management, identity governance and administration, privileged access management, and risk intelligence services across on-premise, cloud, and hybrid technology environments.
Simeio provides services to numerous Fortune 1000 companies across all industries, including financial services, technology, healthcare, media, retail, public sector, utilities, and education.

Simeio is an equal opportunity employer. If you require assistance with completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to our recruitment team - [email protected]. Thank you.

About Your Application
We review every application received and will get in touch if your skills and experience match what we're looking for. If you don't hear back from us within 10 days, please don't be too disappointed; we may keep your CV on our database for any future vacancies, and we would encourage you to keep an eye on our career opportunities as there may be other suitable roles.
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon
On-site
About the Role: Grade Level (for internal use): 10

S&P Global Commodity Insights

The Role: Senior Cloud Engineer
The Location: Hyderabad, Gurgaon
The Team: The Cloud Engineering Team is responsible for designing, implementing, and maintaining the cloud infrastructure that supports applications and services across the S&P Global Commodity Insights organization. The team collaborates closely with data science, application development, and security teams to ensure the reliability, security, and scalability of our cloud solutions.
The Impact: As a Cloud Engineer, you will play a vital role in deploying and managing cloud infrastructure that supports our strategic initiatives. Your expertise in AWS and cloud technologies will help streamline operations, enhance service delivery, and ensure the security and compliance of our environments.
What's in it for you: This position offers the opportunity to work on cutting-edge cloud technologies and collaborate with teams across the organization. You will gain exposure to multiple S&P Commodity Insights divisions and contribute to projects that have a significant impact on the business. This role opens doors to tremendous career opportunities within S&P Global.

Responsibilities:
- Design and deploy cloud infrastructure using core AWS services such as EC2, S3, RDS, IAM, VPC, and CloudFront, ensuring high availability and fault tolerance.
- Deploy, manage, and scale Kubernetes clusters using Amazon EKS, ensuring high availability, secure networking, and efficient resource utilization.
- Develop secure, compliant AWS environments by configuring IAM roles/policies, KMS encryption, security groups, and VPC endpoints.
- Configure logging, monitoring, and alerting with CloudWatch, CloudTrail, and GuardDuty to support observability and incident response.
- Enforce security and compliance controls via IAM policy audits, patching schedules, and automated backup strategies.
- Monitor infrastructure health, respond to incidents, and maintain SLAs through proactive alerting and runbook execution.
- Collaborate with data science teams to deploy machine learning models using Amazon SageMaker, managing model training, hosting, and monitoring.
- Automate and schedule data processing workflows using AWS Glue, Step Functions, Lambda, and EventBridge to support ML pipelines.
- Optimize infrastructure for cost and performance using AWS Compute Optimizer, CloudWatch metrics, auto-scaling, and Reserved Instances/Savings Plans.
- Write and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation for repeatable, automated infrastructure deployments.
- Implement disaster recovery, backups, and versioned deployments using S3 versioning, RDS snapshots, and CloudFormation change sets.
- Set up and manage CI/CD pipelines using AWS services like CodePipeline, CodeBuild, and CodeDeploy to support application and model deployments.
- Manage and optimize real-time inference pipelines using SageMaker Endpoints, Amazon Bedrock, and Lambda with API Gateway to ensure reliable, scalable model serving.
- Support containerized AI workloads using Amazon ECS or EKS, including model serving and microservices for AI-based features.
- Collaborate with SecOps and SRE teams to uphold security baselines, manage change control, and conduct root cause analysis for outages.
- Participate in code reviews, design discussions, and architectural planning to ensure scalable and maintainable cloud infrastructure.
- Maintain accurate and up-to-date infrastructure documentation, including architecture diagrams, access control policies, and deployment processes.
- Collaborate cross-functionally with application, data, and security teams to align cloud solutions with business and technical goals.
- Stay current with AWS and AI/ML advancements, suggesting improvements or new service adoption where applicable.
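The IAM responsibilities above (roles/policies, least privilege) often come down to generating correct policy documents. Below is a minimal sketch of building a read-only S3 policy in Python; the bucket name is hypothetical, and a real deployment would attach this via Terraform or CloudFormation rather than hand-rolled JSON.

```python
import json

def s3_read_only_policy(bucket: str) -> dict:
    """Build a least-privilege IAM policy document granting read-only
    access to a single S3 bucket (sketch; bucket name is illustrative)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {
                "Sid": "ReadObjects",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                # Object-level actions need the /* resource suffix.
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }

print(json.dumps(s3_read_only_policy("example-data-bucket"), indent=2))
```

Note the split between bucket-level (`s3:ListBucket`) and object-level (`s3:GetObject`) statements; mixing their resources is a common policy bug.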
What We're Looking For:
- Strong understanding of cloud infrastructure, particularly AWS services and Kubernetes.
- Proven experience deploying and managing cloud solutions in a collaborative Agile environment.
- Ability to present technical concepts to both business and technical audiences.
- Excellent multi-tasking skills and the ability to manage multiple projects under tight deadlines.

Basic Qualifications:
- BA/BS in computer science, information technology, or a related field.
- 5+ years of experience in cloud engineering or related roles, specifically with AWS.
- Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Knowledge of container orchestration and microservices architecture.
- Familiarity with security best practices in cloud environments.

Preferred Qualifications:
- Extensive hands-on experience with AWS services.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.
- Strong communication skills and the ability to influence stakeholders at all levels.
- Experience with greenfield projects and building cloud infrastructure from scratch.

About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets.
With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights .

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead.
We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you (and your career) need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 315801 Posted On: 2025-06-05 Location: Hyderabad, Telangana, India
Posted 1 week ago
5.0 years
4 - 9 Lacs
Gurgaon
On-site
Do you want to work on complex and pressing challenges, the kind that bring together curious, ambitious, and determined leaders who strive to become better every day? If this sounds like you, you've come to the right place.

Your Impact
You'll join our office as part of the Cybersecurity pillar in the IAM domain. The IAM domain is responsible for developing and operating IAM services globally across the organization for both internal and external facing services. Your team members are located across the world in different time zones (Belgium, Czech Republic, Germany, India, USA, Costa Rica).

As the Senior Security Engineer, you will be responsible for ensuring the security and compliance of our organization's critical resources. This includes implementing and administering identity and access management solutions, managing user accounts, and collaborating with stakeholders to define and implement IAM access policies and standards. You'll also provide technical support and training to end users, as well as monitor and analyze IAM-related logs and reports.

In this role, you will develop capabilities that will serve as a baseline for our firm's users, improving the IAM experience through execution of our IAM strategy. You will drive, contribute to, and learn multiple technology skills (identity orchestration, access management, identity governance) based on our IAM toolkit. Through innovative software-as-a-service solutions and a vibrant ecosystem of alliances, we are redefining what it means to work with McKinsey.

Your Growth
You are someone who thrives in a high-performance environment, bringing a growth mindset and entrepreneurial spirit to tackle meaningful challenges that have a real impact. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities to help you quickly broaden your expertise, grow into a well-rounded professional, and contribute to work that truly makes a difference.
When you join us, you will have:
- Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey.
- A voice that matters: From day one, we value your ideas and contributions. You'll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes.
- Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm's diversity fuels creativity and helps us come up with the best solutions. Plus, you'll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences.
- Exceptional benefits: In addition to a competitive salary (based on your location, experience, and skills), we offer a comprehensive benefits package, including medical, dental, mental health, and vision coverage for you, your spouse/partner, and children.

Your qualifications and skills
- 5+ years of Identity and Access Management (IAM) experience, including 3+ years with Okta Workflows, development, and onboarding.
- Expertise in Single Sign-On (SSO), federated authentication (SAML/OIDC), and SCIM integration for identity provisioning.
- Hands-on experience with Okta SSO, integrations, scripting, automation (e.g., inline/event hooks), and MFA configuration.
- Proficiency in API usage (Okta Native API or similar), troubleshooting, and automation with Terraform.
- Strong programming/scripting skills (Python, Java, Go, or Crepe Script) and experience with Git and CI/CD pipelines.
- Ability to create technical diagrams (e.g., sequence, context, swimlane) for authentication flows.
- Knowledge of access control policies, secure coding principles, and IAM security best practices.
- Proven experience with Okta Identity Engine, Identity Threat Protection, and device access solutions.
- Strong client communication and presentation skills for technical and non-technical audiences.
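As a rough illustration of the SCIM provisioning mentioned in the qualifications, the sketch below assembles a minimal SCIM 2.0 User payload. The attribute values are hypothetical; a real integration would POST this to the identity provider's SCIM endpoint with proper authentication.

```python
import json

# Core User schema URN defined by SCIM 2.0 (RFC 7643).
SCIM_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"

def build_scim_user(user_name: str, given: str, family: str, email: str) -> dict:
    """Assemble a minimal SCIM 2.0 User payload for provisioning (sketch)."""
    return {
        "schemas": [SCIM_USER_SCHEMA],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True, "type": "work"}],
        "active": True,
    }

payload = build_scim_user("jdoe", "Jane", "Doe", "jane.doe@example.com")
print(json.dumps(payload, indent=2))
```

De-provisioning in SCIM is typically a PATCH setting `active` to false (or a DELETE), which is what drives the automated leaver flows described above.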
Posted 1 week ago
4.0 years
0 - 0 Lacs
Mohali
On-site
- 4+ years of experience in System Administration with DevOps experience in AWS; knowledge of Azure and GCP is an advantage.
- AWS: experience with core services like EC2, S3, VPC, EBS, IAM, RDS, and Route 53; cost and performance optimisation.
- GoDaddy: shared hosting, VPS environments, cPanel, backup solutions, DNS, and SSL certificates.
- DevOps skills: managing code pipelines using platforms like Bitbucket and GitHub.
- System administration: server upgrades, security patching, and backup management.
- Define and configure AWS and Azure instances; install and configure software, hardware, and networks, ensuring the smooth operation of IT services.
- Maintain servers, web services, and operating systems.
- Security systems administration: basic security monitoring of systems; configuring and maintaining adequate security parameters.
- Cloud security: Amazon AWS, Windows Azure. O365 administration and Azure configuration are a solid plus; experience with O365 migration and configuration.
- Hands-on experience with LAN/WAN/firewalls.
- Expertise in virtualization technologies; set up VPNs and virtual servers.
- Automation scripting in shell, etc.
- Solid experience configuring AWS EC2 instances, network configurations, Route 53, Cloudflare, S3 buckets, RDS, etc.
- Deploy software on laptops and Macs.
- Experience with SSL installation and configuration on GoDaddy and AWS is a must; Cloudflare SSL is a definite plus.
- Configure website hosting on GoDaddy.

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹45,000.00 per month
Benefits: Paid time off
Schedule: Fixed shift, Monday to Friday
Application Question(s):
- Are you ready to join within 1 day?
- Are you local to Mohali?
Work Location: In person
Speak with the employer: +91 8847334396
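Since the role covers SSL certificate installation and renewal, here is a small hedged sketch of how certificate expiry could be monitored from Python using only the standard library. The hostname lookup function is defined but not called here (it needs outbound network access); the date format matches what `ssl.getpeercert()` returns.

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Parse the 'notAfter' string from ssl.getpeercert()
    (e.g. 'Jun  1 12:00:00 2030 GMT') and return days remaining."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Fetch a server certificate over TLS and report days until expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])

# Demonstrate only the pure parsing helper (no network needed).
print(days_until_expiry("Jan  1 00:00:00 2030 GMT"))
```

A cron job wrapping `cert_days_remaining` and alerting below, say, 30 days is a common lightweight alternative to vendor dashboards.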
Posted 1 week ago
12.0 years
0 Lacs
New Delhi, Delhi, India
On-site
WinZO is India's largest social gaming platform, aiming to build a strong, technology-driven gaming ecosystem in India. In a short span of time, WinZO has emerged as the leanest Series C-funded gaming startup in the Indian startup ecosystem. With a data-driven DNA, WinZO is working towards becoming the one-stop shop for online gaming users across every household in Bharat. With a vision of becoming a household name for Bharat, catering to their entertainment needs through interactive engagements, Paavan Nanda (co-founder, WinZO, Zostel & ZO Rooms) and Saumya Singh Rathore (co-founder, WinZO, ex-Chief of Staff & Growth at ZO Rooms, Zostel, ex-Times Group) are aggressively building the platform not just to capture market opportunities but also to explore and maximize the potential of social interactions as consumption drivers. Both are putting together WinZO piece by piece, using tech and data to create a transparent and unique gaming experience for its users. WinZO, which hosts 100+ games in 12+ languages, has 80% of users consuming the app in vernacular languages. WinZO has always yearned to mentor, guide, and onboard games to be culturally relevant for Bharat. It also provides opportunities for housewives to translate and earn, which empowers them economically. A 150+ member strong team with stellar professionals coming from global tech giants and companies such as Google, Amazon, Flipkart, and McKinsey, WinZO is funded and backed by global gaming and entertainment investment funds such as Griffin Gaming Partners, Maker's Fund, Courtside Ventures, Pags Group, and Kalaari. WinZO is continually working towards revolutionizing the gaming ecosystem by creating a complete entertainment package through a slew of interactive features. Speaking of the larger picture, the platform is driving unique initiatives that constantly attempt to nurture and groom developers.
WinZO Values: Integrity, Excellence, Perseverance, Fine Judgement, Agility

About the Role:
As part of the DevOps engineering team, you will own and optimize the backbone of WinZO's technology: the infrastructure. You'll be managing ultra-high-scale distributed systems, engineering zero-downtime deployment pipelines, and driving automation across our platform. This is a hands-on leadership position where your technical and architectural decisions will directly influence our ability to scale, secure, and deliver product innovation at speed. If you thrive on solving complex infrastructure challenges at scale, this is your next big leap.

What you will do:
- Architect and evolve high-performance, resilient server infrastructure.
- Take end-to-end ownership of system reliability: uptime, latency, scalability, and incident management.
- Lead infrastructure automation using IaC tools like Terraform, Ansible, or equivalent.
- Define and enforce DevSecOps best practices, integrating security deep into the deployment lifecycle.
- Optimize cloud spend while ensuring high availability, performance, and compliance.
- Coach and mentor a team of DevOps engineers to scale their impact and skill sets.
- Proactively identify system bottlenecks and architect solutions for scale.
- Collaborate with Engineering, Product, and Business teams on infra strategy and the technical roadmap.
- Drive improvement of deployment frequency, lead time, and reliability.

What you should have:
- 12+ years of experience in DevOps/SRE/infra roles, with at least 3+ years in a lead capacity.
- Hands-on experience with cloud (AWS preferred) at scale (multi-region, auto-scaling, failover).
- Expertise in IaC tooling, particularly Terraform.
- Fluency in Linux administration, containerization (Docker), and orchestration (Kubernetes, ECS).
- Proficiency in one or more scripting languages: Python, Bash, or Go.
- Deep understanding of networking (VPNs, DNS, load balancers, firewalls) and security (IAM, VPC, WAF).
- Ability to design, implement, and manage scalable CI/CD pipelines using tools like Jenkins or GitHub Actions.
- Practical experience managing relational and NoSQL databases such as MySQL, PostgreSQL, MongoDB, Redis, Cassandra, or DynamoDB.
- Familiarity with database monitoring, backups, failover strategies, and performance tuning.
- Experience with monitoring and observability tools (Prometheus, Grafana, ELK, etc.).
- Familiarity with designing systems for cost optimization and performance tuning.
- Exposure to chaos engineering and resilience testing frameworks.
- Experience with canary deployments, blue-green deployments, and traffic management tools (e.g., Argo Rollouts, Flagger, Istio).

Preferred to have:
- Cloud certifications (AWS preferred).

What we offer you:
- A flat and transparent culture with an incredibly high learning curve.
- A swanky informal workspace which defines our open and vibrant work culture.
- Opportunity to solve new and challenging problems with a high scope of innovation.
- Complete ownership of the product and the chance to conceptualize and implement your solutions.
- Opportunity to work with incredible peers across departments and be a part of the tech revolution.
- Most importantly, a chance to be associated with big impact early in your career.
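Canary and blue-green rollouts like those mentioned above reduce to shifting a traffic weight gradually while watching error rates. Here is a minimal, tool-agnostic sketch; the step sizes and error threshold are illustrative, not the actual defaults of Argo Rollouts or Flagger.

```python
def canary_steps(start: int = 5, step: int = 20, final: int = 100) -> list[int]:
    """Return the sequence of canary traffic weights (percent) to walk through."""
    weights = [start]
    while weights[-1] + step < final:
        weights.append(weights[-1] + step)
    weights.append(final)
    return weights

def should_promote(error_rate: float, threshold: float = 0.01) -> bool:
    """Promote to the next weight only if the canary's error rate stays below threshold."""
    return error_rate < threshold

# Walk the rollout; observed error rates would come from Prometheus in practice.
observed = {5: 0.002, 25: 0.004, 45: 0.003, 65: 0.005, 85: 0.004, 100: 0.003}
for weight in canary_steps():
    if not should_promote(observed[weight]):
        print(f"rollback at {weight}%")
        break
else:
    print("promoted to 100%")
```

In a real controller the "observe, then promote or roll back" loop runs per analysis interval, and rollback means resetting the traffic split to 0% canary.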
Posted 1 week ago
5.0 years
4 - 5 Lacs
Bengaluru
On-site
We are seeking a highly skilled and motivated System Ops Engineer with a strong background in computer science or statistics and at least 5 years of professional experience. The ideal candidate will possess deep expertise in cloud computing (AWS), data engineering, Big Data applications, AI/ML, and SysOps. A strong technical foundation, a proactive mindset, and the ability to work in a fast-paced environment are essential.

Key Responsibilities:

Cloud Expertise:
- Proficient in AWS services including EC2, VPC, Lambda, DynamoDB, API Gateway, EBS, S3, IAM, and more.
- Design, implement, and maintain scalable, secure, and efficient cloud-based solutions.
- Execute optimized configurations for cloud infrastructure and services.

Data Engineering:
- Develop, construct, test, and maintain data architectures, such as databases and processing systems.
- Write efficient Spark and Python code for data processing and manipulation.
- Administer and manage multiple ETL applications, ensuring seamless data flow.

Big Data Applications:
- Lead end-to-end Big Data projects, from design to deployment.
- Monitor and optimize Big Data systems for performance, reliability, and scalability.

AI/ML Applications:
- Hands-on experience developing and deploying AI/ML models, especially in Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI).
- Collaborate with data scientists to productionize ML models and support ongoing model performance tuning.

DevOps and IaaS:
- Utilize DevOps tools for continuous integration and deployment (CI/CD).
- Design and maintain Infrastructure as a Service (IaaS), ensuring scalability, fault tolerance, and automation.

SysOps Responsibilities:
- Manage servers and network infrastructure to ensure system availability and security.
- Configure and maintain virtual machines and cloud-based system environments.
- Monitor system logs, alerts, and performance metrics.
- Install and update software packages and apply security patches.
- Troubleshoot network connectivity issues and resolve infrastructure problems.
- Implement and enforce security policies, protocols, and procedures.
- Conduct regular data backups and disaster recovery tests.
- Optimize systems for speed, efficiency, and reliability.
- Collaborate with IT and development teams to support integration of new systems and applications.

Qualifications:
- Bachelor's degree in Computer Science, Statistics, or a related field.
- 5+ years of experience in cloud computing, data engineering, and related technologies.
- In-depth knowledge of AWS services and cloud architecture.
- Strong programming experience in Spark and Python.
- Proven track record in Big Data applications and pipelines.
- Applied experience with AI/ML models, particularly in NLP, CV, and GenAI domains.
- Skilled in managing and administering ETL tools and workflows.
- Experience with DevOps pipelines, CI/CD tools, and cloud automation.
- Demonstrated experience with SysOps or cloud infrastructure/system operations.
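Routine SysOps automation against cloud APIs usually needs retry logic for transient failures (throttling, 5xx responses). A minimal hedged sketch follows; the exception type, delays, and the flaky callable are all illustrative stand-ins, not real AWS SDK behavior.

```python
import time
import random

class TransientError(Exception):
    """Stand-in for a throttling/5xx error from a cloud API (illustrative)."""

def retry_with_backoff(fn, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call fn(), retrying on TransientError with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random fraction of the exponential cap.
            sleep(random.uniform(0, base_delay * 2 ** attempt))

# Example: a flaky call that succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError
    return "ok"

print(retry_with_backoff(flaky, sleep=lambda s: None))
```

Injecting `sleep` as a parameter keeps the helper testable without real waits, which is the same trick used to unit-test schedulers and monitors.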
Posted 1 week ago
4.0 years
0 Lacs
India
On-site
As a ForgeRock Developer, you will design, develop, and implement solutions using the ForgeRock Platform. You will be responsible for integrating ForgeRock with various enterprise systems such as HRIS, ERP, and CRM applications, and for automating user provisioning and de-provisioning processes using ForgeRock's automation capabilities. Your role will involve working closely with business stakeholders to understand their IAM requirements and translate them into technical solutions.

- You will design and implement the architecture of the entire process responsible for decommissioning legacy applications.
- You will integrate ForgeRock with various enterprise applications like HRIS, ERP, and CRM.
- You will create automated workflows to manage user lifecycle processes efficiently.
- You will participate in the migration deliverables from ForgeRock SaaS to on-premises.
- You will effectively communicate and demonstrate ForgeRock's capabilities to stakeholders.
- You will create and maintain insightful reports and audit trails to monitor identity and access activities and demonstrate compliance.
- You will write custom journeys on ForgeRock AM.
- You will write custom scripts on ForgeRock AM.
- You will provide post-implementation support, troubleshoot issues, and conduct user training to ensure successful system adoption and ongoing operations.

Desired qualifications
- Minimum bachelor's degree in computer science, engineering, or a related field.
- Minimum 4+ years of professional work experience in the Identity and Access Governance (IAG) and ForgeRock domains.
- Well-versed in ForgeRock AM: journeys, scripts, OAuth2 clients.
- Good understanding of the OAuth, SAML, LDAP, and OIDC protocols.
- Good understanding of ForgeRock AM (journeys, realms, scripts).
- Able to write custom policies and scripts.
- Able to assist developers in understanding the ForgeRock modules.
- Understanding of ForgeRock IDM and DS.
- Strong IAM fundamentals.
- Understanding of Java/JavaScript/GoLang is appreciated.
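OIDC ID tokens, such as those issued to an AM OAuth2 client, are JWTs. As a rough illustration of what sits inside them, the sketch below decodes a JWT payload without signature verification; production code must verify the signature with a proper JOSE library, and the token here is constructed locally with made-up claims.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying its signature.
    For inspection/debugging only; never trust unverified claims."""
    payload_b64 = token.split(".")[1]
    # Restore stripped base64url padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def encode_segment(obj: dict) -> str:
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

# Build a toy unsigned token to demonstrate (issuer/client are hypothetical).
header = {"alg": "none", "typ": "JWT"}
claims = {"sub": "jdoe", "iss": "https://am.example.com/oauth2", "aud": "my-client"}
token = f"{encode_segment(header)}.{encode_segment(claims)}."
print(decode_jwt_payload(token)["sub"])
```

Checking `iss` and `aud` against expected values, after signature verification, is the core of validating tokens from any AM realm.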
Posted 1 week ago
0 years
0 Lacs
India
On-site
Your work profile - SailPoint Developer [Deputy Manager] As a SailPoint Developer in our Cyber: Identity Team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations: - As a SailPoint Developer, you will design, develop, and implement identity and access management (IAM) solutions using SailPoint IdentityIQ. You will be responsible for integrating SailPoint with various enterprise systems, automating user provisioning and de-provisioning processes, and ensuring compliance with security and regulatory standards. Your role will involve working closely with business stakeholders to understand their requirements and translating them into technical solutions. 1. You will integrate SailPoint with core systems such as ServiceNow, Active Directory, LDAP, PAM, and other identified applications. 2. You will deliver IGA processes including recertification, joiner/mover/leaver (JML), access request, segregation of duties, and role-based access control (RBAC). 3. You will configure connectors to onboard applications, utilizing both out-of-the-box connectors like Web Service, JDBC, RACF/ACF2 and custom options. 4. You will develop and maintain comprehensive access policies and rules within SailPoint to enforce access controls and mitigate risks. 5. You will design and implement automated workflows and lifecycle management processes to streamline identity lifecycle events and improve efficiency. 6. You will effectively communicate and demonstrate SailPoint's capabilities to stakeholders. 7. You will create and maintain insightful reports and audit trails to monitor identity and access activities and demonstrate compliance. 8. You will collaborate with security teams to identify and address potential vulnerabilities and threats related to identity and access management. 9. 
You will provide post-implementation support, troubleshoot issues, and conduct user training to ensure successful system adoption and ongoing operations.
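The joiner/mover/leaver (JML) processes listed above reduce to routing a lifecycle event to a set of provisioning actions. A hedged sketch of that routing, with invented event names and action strings rather than SailPoint IdentityIQ APIs:

```python
# Illustrative JML routing logic. Event names and action strings are
# hypothetical; in SailPoint these flows are driven by lifecycle events
# and workflows, not application code like this.
def route_lifecycle_event(event: str, user: dict) -> list:
    actions = []
    if event == "joiner":
        actions.append(f"provision birthright access for {user['id']}")
    elif event == "mover":
        actions.append(f"recertify access for {user['id']} in {user['department']}")
    elif event == "leaver":
        actions.append(f"deprovision all access for {user['id']}")
    else:
        raise ValueError(f"unknown lifecycle event: {event}")
    return actions
```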
Posted 1 week ago
4.0 years
0 Lacs
India
On-site
Role: SailPoint IdentityNow/ISC Location: PAN India Desired qualifications Minimum bachelor’s degree in Computer Science, Engineering, or a related field. Minimum 4 years of professional work experience in the Identity and Access Governance (IAG) domain and SailPoint IdentityIQ/IdentityNow. Deep expertise in the SailPoint IdentityIQ platform, including advanced configuration, customization, and development. In-depth knowledge of identity and access management (IAM) principles, frameworks, and industry best practices. Strong understanding of application onboarding and migration processes within the SailPoint platform. Effective communication skills to collaborate with technical and non-technical stakeholders. Strong interpersonal skills to build relationships and work effectively in a team environment.
Posted 1 week ago
4.0 years
0 Lacs
India
On-site
Your work profile As a Consultant in our Cyber: Identity Team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations: - You will integrate SailPoint with core systems such as ServiceNow, Active Directory, LDAP, PAM, and other identified applications. You will deliver IGA processes including recertification, joiner/mover/leaver (JML), access request, segregation of duties, and role-based access control (RBAC). You will configure connectors to onboard applications, utilizing both out-of-the-box connectors like Web Service, JDBC, and RACF/ACF2 and custom options. You will develop and maintain comprehensive access policies and rules within SailPoint to enforce access controls and mitigate risks. You will design and implement automated workflows and lifecycle management processes to streamline identity lifecycle events and improve efficiency. You will effectively communicate and demonstrate SailPoint's capabilities to stakeholders. You will create and maintain insightful reports and audit trails to monitor identity and access activities and demonstrate compliance. You will collaborate with security teams to identify and address potential vulnerabilities and threats related to identity and access management. You will provide post-implementation support, troubleshoot issues, and conduct user training to ensure successful system adoption and ongoing operations. Desired qualifications Minimum bachelor’s degree in Computer Science, Engineering, or a related field. Minimum 4 years of professional work experience in the Identity and Access Governance (IAG) domain and SailPoint IdentityIQ. Deep expertise in the SailPoint IdentityIQ platform, including advanced configuration, customization, and development. In-depth knowledge of identity and access management (IAM) principles, frameworks, and industry best practices. Strong understanding of application onboarding and migration processes within the SailPoint platform.
Effective communication skills to collaborate with technical and non-technical stakeholders. Strong interpersonal skills to build relationships and work effectively in a team environment.
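Segregation of duties, named in the responsibilities above, is at bottom a check that no user holds a "toxic" combination of roles. A minimal sketch of that check (role names are invented; real SoD policies are configured inside SailPoint, not written as application code):

```python
def sod_violations(assigned_roles: set, toxic_pairs: list) -> list:
    """Return every configured toxic role pair the user fully holds.

    assigned_roles: the user's current roles.
    toxic_pairs: (role_a, role_b) combinations that policy forbids together.
    """
    return [pair for pair in toxic_pairs
            if pair[0] in assigned_roles and pair[1] in assigned_roles]
```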
Posted 1 week ago
5.0 years
0 Lacs
Chennai
On-site
The Security Platform Engineer – Authentication position is an opportunity to join the Security Platform team, which provides authentication services for applications with B2E/B2B/B2C users and continually improves the security posture across the enterprise. The person in this position will need to be detail-oriented and have excellent communication skills to play an essential role in developing, building, and rolling out authentication solutions in a fast-paced global automotive enterprise. This is a hybrid position that requires working on-site a minimum of 50% of the time. Basic Qualifications: Bachelor’s degree in Computer Science, Information Security, or related fields 5+ years of IT working experience 3+ years of Security/IAM/Authentication experience with ADFS and Entra ID 2+ years of working experience with Entra ID application registration and SSO configuration 2+ years of working with the MSFT Graph API 2+ years of Active Directory administration experience 1+ years of implementing OAuth in a Google Cloud Platform (GCP) environment Expert in utilizing Microsoft Power Platform (Power Automate, Power Apps, and AI Builder) Strong knowledge of Active Directory and authentication Strong coding skills required; hands-on experience with PowerShell script programming preferred Our preferred requirements Ability to communicate effectively with all levels of employees and management Highly motivated individual with strong technical skills and a desire to resolve problems Strong written and verbal communication skills with attention to detail and quality Solid understanding of security domains, especially Identity and Access Management Hands-on experience developing and managing RESTful APIs GitHub knowledge Certifications: AZ-500: Microsoft Azure Security Engineer Associate certified Skills & Responsibilities Extensive knowledge of and working experience with various authentication methods and technologies Strong knowledge of federation protocols (OIDC, SAML, etc.)
Strong understanding of different types of multi-factor authentication Design, implement, and roll out new MFA capabilities Analyze use cases and requirements from applications, systems, and tools across the enterprise to design and implement authentication solutions Provide end-to-end (DevOps) support for Authentication/MFA solutions On-call rotations to provide 24x7 coverage Extensive knowledge of and working experience with the Entra ID platform Google Cloud Platform (GCP) OAuth implementation Develop PowerShell-based scripts for administration Develop automation and strategies to manage Entra ID configurations via API Follow the Agile process, using Jira for backlog management
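Several responsibilities above — Graph API work and managing Entra ID configuration via API — start with acquiring an app-only token through the OAuth2 client credentials grant (RFC 6749, §4.4). The sketch below only constructs the token request for the Microsoft identity platform v2.0 endpoint; it does not send it. The `.default` scope is the documented convention for app-only Graph access; the tenant and client values are placeholders.

```python
def client_credentials_request(tenant_id: str, client_id: str, client_secret: str):
    """Build (but do not send) the OAuth2 client credentials token request
    against the Microsoft identity platform v2.0 token endpoint."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # App-only Graph access uses the .default scope convention.
        "scope": "https://graph.microsoft.com/.default",
    }
    return url, body
```

A real caller would POST `body` as form data to `url` and use the returned `access_token` as a Bearer token on Graph calls.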
Posted 1 week ago
0 years
0 Lacs
Chennai
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. IDN JD: Ability to drive IGA projects, including project management, team management, and client management. Should have experience in implementing at least one complete IAM SDLC engagement; this must include activities such as requirements gathering, analysis, design, development, testing, deployment, and application support. Must have experience in SailPoint IDN architecture, design, development, configuration, testing, integration, and deployment. Must have experience in source/identity profile configuration, transform/rule development, workflow configuration, the JML process and certification, and password sync. Must have experience in onboarding AD, JDBC, and web service applications to IDN. Must have experience in deployment of the virtual appliance, understanding different VA architectures, deploying rules in cloud/VA, and high availability/DR configuration. Configuration and deployment of IQService in cloud and on-prem environments. Professional IT certifications are a plus. Excellent verbal and written communication skills. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
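Transform development, named in the requirements above, often amounts to deriving an account attribute from identity attributes. A hypothetical example in plain Python (IdentityNow transforms are actually declared in its JSON transform DSL; this only illustrates the underlying logic, including collision handling):

```python
def build_username(first: str, last: str, existing: set) -> str:
    """Illustrative account-name derivation: first initial + last name,
    lowercased, with a numeric suffix appended on collision."""
    base = (first[0] + last).lower()
    candidate, n = base, 1
    while candidate in existing:
        n += 1
        candidate = f"{base}{n}"
    return candidate
```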
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
India
Remote
Python JD: Role Summary: We are seeking a skilled Python Developer with strong experience in data engineering, distributed computing, and cloud-native API development. The ideal candidate will have hands-on expertise in Apache Spark, Pandas, and workflow orchestration using Airflow or similar tools, along with deep familiarity with AWS cloud services. You’ll work with cross-functional teams to build, deploy, and manage high-performance data pipelines, APIs, and ML integrations. Key Responsibilities: Develop scalable and reliable data pipelines using PySpark and Pandas. Orchestrate data workflows using Apache Airflow or similar tools (e.g., Prefect, Dagster, AWS Step Functions). Design, build, and maintain RESTful and GraphQL APIs that support backend systems and integrations. Collaborate with data scientists to deploy machine learning models into production. Build cloud-native solutions on AWS, leveraging services like S3, Glue, Lambda, EMR, RDS, and ECS. Support microservices architecture with containerized deployments using Docker and Kubernetes. Implement CI/CD pipelines and maintain version-controlled, production-ready code. Required Qualifications: 3–5 years of experience in Python programming with a focus on data processing. Expertise in Apache Spark (PySpark) and Pandas for large-scale data transformations. Experience with workflow orchestration using Airflow or similar platforms. Solid background in API development (RESTful and GraphQL) and microservices integration. Proven hands-on experience with AWS cloud services and cloud-native architectures. Familiarity with containerization (Docker) and CI/CD tools (GitHub Actions, CodeBuild, etc.). Excellent communication and cross-functional collaboration skills. Preferred Skills: Exposure to infrastructure as code (IaC) tools like Terraform or CloudFormation. Experience with data lake/warehouse technologies such as Redshift, Athena, or Snowflake. 
Knowledge of data security best practices, IAM role management, and encryption. Familiarity with monitoring/logging tools like Datadog, CloudWatch, or Prometheus. PySpark, Pandas, and data transformation or workflow experience is a must (at least 2 years). Pay: Attractive Salary Interested candidates can call or WhatsApp their resume to 9092626364 Job Type: Full-time Benefits: Cell phone reimbursement Work from home Schedule: Day shift Weekend availability Work Location: In person
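For the pipeline work described above, a single Pandas transformation step typically cleans raw records, derives a column, and aggregates. A minimal sketch (the column names and the aggregation are invented for illustration, not taken from any real pipeline):

```python
import pandas as pd

def daily_totals(raw: pd.DataFrame) -> pd.DataFrame:
    """Drop incomplete rows, derive a calendar day from a timestamp
    column, and sum amounts per day."""
    df = raw.dropna(subset=["user_id", "amount"])
    df = df.assign(day=pd.to_datetime(df["ts"]).dt.date)
    return df.groupby("day", as_index=False)["amount"].sum()
```

In a PySpark pipeline the same step would use `dropna`, `withColumn`, and `groupBy(...).sum()` on a Spark DataFrame.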
Posted 1 week ago
10.0 years
0 Lacs
Noida
On-site
Company Summary: DISH Network Technologies India Pvt. Ltd is a technology subsidiary of EchoStar Corporation. Our organization is at the forefront of technology, serving as a disruptive force and driving innovation and value on behalf of our customers. Our product portfolio includes Boost Mobile (consumer wireless), Boost Mobile Network (5G connectivity), DISH TV (Direct Broadcast Satellite), Sling TV (Over The Top service provider), OnTech (smart home services), Hughes (global satellite connectivity solutions) and Hughesnet (satellite internet). Our facilities in India are some of EchoStar’s largest development centers outside the U.S. As a hub for technological convergence, our engineering talent is a catalyst for innovation in multimedia network and communications development. Summary: Boost Mobile, a brand under EchoStar Corporation (NASDAQ: SATS), is our cutting-edge, standalone 5G broadband network covering over 268 million Americans. Our mobile carrier’s cloud-native O-RAN 5G network delivers lightning-fast speeds, reliability, and coverage on the latest 5G devices. Recently, Boost Mobile was named the #1 Network in New York City, according to umlaut’s latest study!
Job Duties and Responsibilities: Key Responsibilities: Manages public cloud infrastructure deployments, handles Jira, and troubleshoots Leads DevOps initiatives for US customers, focusing on AWS and 5G network functions Develops technical documentation and supports root cause analysis Deploys 5G network functions in AWS environments Expertise in Kubernetes and EKS for container orchestration Extensive experience with AWS services (EC2, ELB, VPC, RDS, DynamoDB, IAM, CloudFormation, S3, CloudWatch, CloudTrail, CloudFront, SNS, SQS, SWF, EBS, Route 53, Lambda) Orchestrates Docker containers with Kubernetes for scalable deployments Automates 5G Application deployments using AWS CodePipeline (CodeCommit/CodeBuild/CodeDeploy) Implements and operates containerized cloud application platform solutions Focuses on cloud-ready, distributed application architectures, containerization, and CI/CD pipelines Works on automation and configuration as code for foundational architecture related to connectivity across Cloud Service Providers Designs, configures, and manages cloud infrastructures using AWS services Experienced with EC2, ELB, EMR, S3 CLI, and API scripting Strong knowledge of Kubernetes operational building blocks (Kube API, Kube Scheduler, Kube Controller Manager, ETCD) Provides solutions to common Kubernetes errors (CreateContainerConfigError, ImagePullBackOff, CrashLoopBackOff, Kubernetes Node Not Ready) Knowledgeable in Linux/UNIX administration and automation Familiar with cloud and virtualization technologies (Docker, Azure, AWS, VMware) Supports cloud-hosted systems 24/7, including troubleshooting and root cause analysis Configures Kubernetes clusters for networking, load balancing, pod security, and certificate management Configures monitoring tools (Datadog, Dynatrace, AppDynamics, ELK, Grafana, Prometheus) Participates in design reviews of architecture patterns for service/application deployment in AWS Skills - Experience and Requirements: Education and 
Experience: Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related technical degree 10+ years related experience; or equivalent combination of education and experience 4+ years of experience supporting public cloud platforms 4+ years of experience with cloud system integration, support, and automation Skills and Qualifications: Must have excellent verbal and written communication skills Operational experience with Infrastructure as Code solutions and tools, such as Ansible, Terraform, and CloudFormation Deep understanding of DevOps and agile methodologies Ability to work well under pressure and manage tight deadlines Proven track record of operational process change and improvement Deep understanding of distributed systems and microservices AWS certifications (Associate level or higher) are a plus Kubernetes certifications are a plus
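The common pod failure states named in the responsibilities above each have a characteristic first-line diagnosis. A hedged triage table (the cause strings are abbreviations of typical causes, not an exhaustive runbook):

```python
# Summary triage map for common Kubernetes pod states; first-line
# diagnosis only, not a substitute for `kubectl describe` and logs.
K8S_TRIAGE = {
    "ImagePullBackOff": "image name/tag wrong, or registry credentials missing",
    "CrashLoopBackOff": "container exits repeatedly; check logs and probes",
    "CreateContainerConfigError": "referenced ConfigMap or Secret is missing",
    "NodeNotReady": "kubelet down, or node under network/disk pressure",
}

def triage(state: str) -> str:
    return K8S_TRIAGE.get(state, "unrecognized state; describe the pod and check events")
```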
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Madison Logic: Our team is reshaping B2B marketing and having fun in the process! When joining Madison Logic, you are committing to giving 100% and always striving for more. As a truly global company, we take pride in a diverse culture free from gender, racial, and other forms of bias. Our Vision: We empower B2B organizations globally to convert their best accounts faster Our Values: URGENCY Lead with Action. Prioritize Follow-up. ACCOUNTABILITY Don't Point Fingers. Take Responsibility. INNOVATION Think Big. Innovate. RESPECT Respect Customers. Respect Each Other. INTEGRITY Act Ethically. Lead by Example. At ML you will work with & learn from an incredible group of people who care about your success as much as they care about their own. Our team is at the heart of what we do and our success starts with you! About the Role: The Compliance Manager will support our information security and compliance programs. This role maintains, monitors, and improves our SOC 2 controls; helps drive security and privacy initiatives; and supports audits and customer compliance inquiries. The ideal candidate has hands-on experience in compliance frameworks, strong organizational skills, and a collaborative mindset to work cross-functionally with IT, Legal, and Sales teams. This is an Individual Contributor (non-management) position.
Responsibilities: Coordinate and maintain activities to support SOC 2 compliance across the organization Manage day-to-day compliance operations, including monitoring control effectiveness, collecting evidence, and documenting processes Support responses to customer security questionnaires and due diligence requests Assist with internal audits and external assessments related to SOC 2 and related frameworks (e.g., GDPR, CCPA) Track and help remediate compliance and security risks Collaborate with Sales and Legal to review security-related contract terms and data processing agreements Assist in vendor risk assessments and third-party security reviews Maintain internal documentation for security practices, policies, and compliance initiatives Contribute to security awareness efforts and training initiatives across the organization Basic Qualifications: On-site working at the ML physical office five days per week is required through the end of probation (6 months), transitioning to 2-day WFH post-probation. B.S. degree in Computer Science or Computer Information Systems desired 5+ years of experience with the implementation and support of an IT Security program Prior experience developing IT Security and Data Governance policies 5+ years auditing experience in any of the following certification standards: GDPR / CCPA, SOC 2, ISO 27001, PCI, COBIT, NIST, CIS, HIPAA.
Working knowledge of penetration testing tools, AWS network security and IAM, perimeter security, application firewalls, single sign-on, Active Directory policy, SIEM, anti-malware, VPN, email security, key management, incident management, risk assessment, log management, change management, backup and disaster recovery, and highly available and distributed infrastructures Working knowledge of data subject privacy rights, PII data handling, data protection and cookie laws, data transmission and encryption requirements, data access controls, data retention and destruction, vendor assessment questionnaires, data privacy impact assessments, data breach, and other cyber incident response Other Characteristics: Strong analytical skills Excellent organizational and time management skills, possessing the ability to prioritize work under pressure of time constraints Superior written and verbal communication skills Excellent presentation skills with prior experience presenting to executives to achieve buy-in Highly productive and resourceful with a “Can do” attitude Strong technical skills Team members are encouraged to work collaboratively with an emphasis on results, not on hierarchy or titles Expected Compensation: (Dependent upon Experience) Fixed CTC: 17 LPA - 20 LPA Work Environment: We offer a mix of in-office and hybrid working. Hybrid remote work arrangements are not available for all positions. Please refer to the job posting detail to determine what in-office requirements apply. Where applicable, hybrid WFH work must be conducted from your home office located in a jurisdiction in which Madison Logic has the legal right to operate. WFH requires availability and responsiveness on a full-time basis from a distraction-free environment with access to high-speed internet. Please inquire for more details. Pay Transparency/Equity: We are committed to paying our team equitably for their work, commensurate with their individual skills and experience.
Salary Range and additional compensation, including discretionary bonuses and incentive pay, are determined by a rigorous review process taking into account the experience, education, certifications and skills required for the specific role, equity with similarly situated team members, as well as employer-verified region-specific market data provided by an independent 3rd party partner. We will provide more information about our perks & benefits upon request. Our Commitment to Diversity & Inclusion: Madison Logic is proud to be an equal opportunity employer. We are committed to equal employment opportunity regardless of sex, race, color, religion, national origin, sexual orientation, age, marital status, disability, gender identity or Veteran status. Privacy Disclosure: All of the information collected in this form and/or by your application by submission of your online profile is necessary and relevant to the performance of the job applied for. We will process the information provided by you in this form, your CV (including physical and online resume profiles), by the referees you have noted, and by the educational institutions with whom we may undertake to verify your qualifications with, in accordance with our privacy policy and for recruitment purposes only. For more information on how we process the information you have provided including relevant lawful bases (where relevant) please see our privacy policy which is available on our website ( https://www.madisonlogic.com/privacy/ ).
Posted 1 week ago
2.0 - 4.0 years
2 - 8 Lacs
Noida
On-site
Expertise in AWS services like EC2, CloudFormation, S3, IAM, SNS, SQS, EMR, Athena, Glue, Lake Formation, etc. Expertise in Hadoop/EMR/Databricks with good debugging skills to resolve Hive and Spark related issues. Sound fundamentals of database concepts and experience with relational or non-relational database types such as SQL, Key-Value, Graphs, etc. Experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, etc. Experience in programming languages such as Python/PySpark. Excellent written and verbal communication skills. Key Responsibilities Working closely with the Data Lake engineers to provide technical guidance, consultation, and resolution of their queries. Assist in development of simple and advanced analytics best practices, processes, technology & solution patterns, and automation (including CI/CD). Working closely with various stakeholders in the US team with a collaborative approach. Develop data pipelines in Python/PySpark to be executed in the AWS cloud. Set up analytics infrastructure in AWS using CloudFormation templates. Develop mini/micro batch and streaming ingestion patterns using Kinesis/Kafka. Seamlessly upgrade the application to higher versions, such as Spark/EMR upgrades. Participates in the code reviews of the developed modules and applications. Provides inputs for formulation of best practices for ETL processes/jobs written in programming languages such as PySpark, and for BI processes. Working with column-oriented data storage formats such as Parquet, interactive query services such as Athena, and the event-driven computing cloud service Lambda. Performing R&D with respect to the latest big data technologies in the market, performing comparative analysis, and providing recommendations to choose the best tool as per the current and future needs of the enterprise.
Required Qualifications Bachelor's or Master's degree in Computer Science or a similar field 2-4 years of strong experience in big data development Expertise in AWS services like EC2, CloudFormation, S3, IAM, SNS, SQS, EMR, Athena, Glue, Lake Formation, etc. Expertise in Hadoop/EMR/Databricks with good debugging skills to resolve Hive and Spark related issues. Sound fundamentals of database concepts and experience with relational or non-relational database types such as SQL, Key-Value, Graphs, etc. Experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, etc. Experience in programming languages such as Python/PySpark. Excellent written and verbal communication skills. Preferred Qualifications Cloud certification (AWS, Azure or GCP) About Our Company Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status or any other basis prohibited by law. Full-Time/Part-Time Full time Timings (2:00p-10:30p) India Business Unit AWMPO AWMP&S President's Office Job Family Group Technology
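The mini/micro-batch ingestion pattern mentioned in the responsibilities above boils down to grouping a record stream into fixed-size chunks before each write. A self-contained sketch of that grouping (the consumer/source wiring to Kinesis or Kafka is omitted):

```python
from itertools import islice

def micro_batches(records, size):
    """Group an arbitrary record stream into fixed-size micro-batches,
    the shape a Kinesis/Kafka consumer loop applies before each write
    to the data lake. The final batch may be smaller than `size`."""
    it = iter(records)
    while batch := list(islice(it, size)):
        yield batch
```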
Posted 1 week ago
0 years
5 - 7 Lacs
Ahmedabad
Remote
We’re looking for a proactive DevOps Engineer with strong hands-on experience in managing CI/CD pipelines, cloud infrastructure (preferably AWS), containerized environments (Docker/Kubernetes), and system administration. The ideal candidate will be well-versed in monitoring, scripting, security best practices, and working in agile environments. This role demands someone who can not only build and maintain robust infrastructure but also collaborate across teams and support production environments. What You’ll Be Doing Undertake ongoing management, maintenance and administration activity of remote server(s) for clients. Attend, manage and rectify technical support queries belonging to active managed hosting services contracts. Work on performance tuning, package pulling/installation, updates/patch management, and network and server management issues. Manage helpdesk/tickets and technical support operations for all clients, along with planning of scheduled maintenance wherever required. Learn new technologies and convert them into customer solutions. Achieve successful onboarding of new clients onto the hosting infrastructure. Streamline deployment processes with automation for faster and more secure deployments. Diagnose, troubleshoot, and rectify problems with various system resources, software components, or other network infrastructure. Manage dedicated and virtual server environment onboarding setups and assist application deployments as well as migrations. Mentor the Network and IT teams on various aspects and maintain ongoing assistance for all vital priorities. Constantly improve security practices, deployment, and automation methodologies. Maintain health-check reports of the IT infrastructure, breakdown reports, and other analytics as required by management.
Accountable for compliance with ISO and other security standards What We’d Love To See CI/CD Tools: GitLab CI, Jenkins, GitHub Actions Infrastructure as Code (IaC): Terraform, CloudFormation Containers: Docker, Kubernetes (EKS, AKS, or GKE preferred) Cloud Platforms: AWS (EC2, RDS, S3, IAM, CloudWatch, ALB) Linux/Unix System Administration & Shell Scripting Monitoring & Logging: Site24x7, Prometheus, Grafana, New Relic, CloudWatch Scripting: Bash, Python (or similar scripting language) Source Control: Git (GitHub/GitLab/Bitbucket) DevSecOps tools (OWASP, OpenVAS, Trivy) Understanding of system uptime, backup strategies, and rollback processes It’d Be Great If You Had Any DevOps-related certification is a plus Experience with GCP or Azure platforms Prior exposure to client communication, especially over calls Ability to coordinate across multiple teams for resolving infrastructure issues Familiar with DevSecOps: secure CI/CD, scanning, secrets management, and infra hardening Deep understanding of high-availability infrastructure setups and disaster recovery strategies Hands-on with performance optimization and cost-efficient architecture on cloud platforms
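Rollback processes, noted in the skills above, usually hinge on a simple post-deploy gate over health-check results. An illustrative policy sketch (the 80% threshold is an assumed policy value, not a standard, and real pipelines wire this to monitoring rather than a boolean list):

```python
def should_roll_back(health_checks, threshold=0.8):
    """Roll back when the pass rate of post-deploy health checks
    falls below the threshold; with no signal at all, fail safe."""
    if not health_checks:
        return True  # no signal: treat as failure
    return sum(health_checks) / len(health_checks) < threshold
```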
Posted 1 week ago
7.0 years
5 - 8 Lacs
Jaipur
On-site
ABOUT HAKKODA Hakkoda, an IBM Company, is a modern data consultancy that empowers data driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone’s input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly-growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you! As an AWS Managed Services Architect, you will play a pivotal role in architecting and optimizing the infrastructure and operations of a complex Data Lake environment for BOT clients. You’ll leverage your strong expertise with AWS services to design, implement, and maintain scalable and secure data solutions while driving best practices. You will work collaboratively with delivery teams across the U.S., Costa Rica, Portugal, and other regions, ensuring a robust and seamless Data Lake architecture. In addition, you’ll proactively engage with clients to support their evolving needs, oversee critical AWS infrastructure, and guide teams toward innovative and efficient solutions. This role demands a hands-on approach, including designing solutions, troubleshooting, optimizing performance, and maintaining operational excellence. Role Description AWS Data Lake Architecture: Design, build, and support scalable, high-performance architectures for complex AWS Data Lake solutions.
AWS Services Expertise: Deploy and manage cloud-native solutions using a wide range of AWS services, including but not limited to- Amazon EMR (Elastic MapReduce): Optimize and maintain EMR clusters for large-scale big data processing. AWS Batch: Design and implement efficient workflows for batch processing workloads. Amazon SageMaker: Enable data science teams with scalable infrastructure for model training and deployment. AWS Glue: Develop ETL/ELT pipelines using Glue to ensure efficient data ingestion and transformation. AWS Lambda: Build serverless functions to automate processes and handle event-driven workloads. IAM Policies: Define and enforce fine-grained access controls to secure cloud resources and maintain governance. AWS IoT & Timestream: Design scalable solutions for collecting, storing, and analyzing time-series data. Amazon DynamoDB: Build and optimize high-performance NoSQL database solutions. Data Governance & Security: Implement best practices to ensure data privacy, compliance, and governance across the data architecture. Performance Optimization: Monitor, analyze, and tune AWS resources for performance efficiency and cost optimization. Develop and manage Infrastructure as Code (IaC) using AWS CloudFormation, Terraform, or equivalent tools to automate infrastructure deployment. Client Collaboration: Work closely with stakeholders to understand business objectives and ensure solutions align with client needs. Team Leadership & Mentorship: Provide technical guidance to delivery teams through design reviews, troubleshooting, and strategic planning. Continuous Innovation: Stay current with AWS service updates, industry trends, and emerging technologies to enhance solution delivery. Documentation & Knowledge Sharing: Create and maintain architecture diagrams, SOPs, and internal/external documentation to support ongoing operations and collaboration. Qualifications 7+ years of hands-on experience in cloud architecture and infrastructure (preferably AWS). 
• 3+ years of experience specifically in architecting and managing Data Lake or big data solutions on AWS.
• Bachelor’s Degree in Computer Science, Information Systems, or a related field (preferred).
• AWS Certifications such as Solutions Architect Professional or Big Data Specialty.
• Experience with Snowflake, Matillion, or Fivetran in hybrid cloud environments.
• Familiarity with Azure or GCP cloud platforms.
• Understanding of machine learning pipelines and workflows.
Technical Skills:
• Expertise in AWS services such as EMR, Batch, SageMaker, Glue, Lambda, IAM, IoT, Timestream, DynamoDB, and more.
• Strong programming skills in Python for scripting and automation.
• Proficiency in SQL and performance tuning for data pipelines and queries.
• Experience with IaC tools like Terraform or CloudFormation.
• Knowledge of big data frameworks such as Apache Spark, Hadoop, or similar.
• Data Governance & Security: Proven ability to design and implement secure solutions, with strong knowledge of IAM policies and compliance standards.
• Problem-Solving: Analytical and problem-solving mindset to resolve complex technical challenges.
• Collaboration: Exceptional communication skills to engage with technical and non-technical stakeholders. Ability to lead cross-functional teams and provide mentorship.
Benefits: Health Insurance, Paid leave, Technical training and certifications, Robust learning and development opportunities, Incentive, Toastmasters, Food Program, Fitness Program, Referral Bonus Program.
Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? 🚀 💻 Apply today 👇 and join a team that’s shaping the future!
Hakkoda has been acquired by IBM and will be integrated into the IBM organization; Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
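The Hakkoda role above asks for fine-grained IAM policies that secure cloud resources. As a minimal illustration (the bucket name and helper function are invented for this sketch, not taken from the posting), a least-privilege read-only S3 policy document can be assembled in Python:

```python
import json

def s3_read_only_policy(bucket: str) -> dict:
    """Build a least-privilege IAM policy document granting read-only
    access to a single S3 bucket (resource names are illustrative)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                # ListBucket applies to the bucket ARN itself.
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {
                "Sid": "ReadObjects",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                # GetObject applies to the objects inside the bucket.
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }

policy = s3_read_only_policy("example-data-lake")
print(json.dumps(policy, indent=2))
```

Splitting `s3:ListBucket` (bucket ARN) from `s3:GetObject` (`bucket/*`) is the usual pattern, since the two actions apply to different resource types.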
Posted 1 week ago
1.0 years
0 - 0 Lacs
India
On-site
Key Responsibilities:
• Design, deploy, and manage scalable infrastructure on AWS (EKS, ECS, EC2, S3, IAM, etc.)
• Maintain and optimize Kubernetes clusters and containerized workloads
• Implement and manage CI/CD pipelines to support rapid, secure, and stable deployments
• Ensure high system reliability through effective monitoring, logging, and alerting using Prometheus, Grafana, and AppDynamics (or similar APM tools)
• Lead and participate in incident response and root cause analysis, and implement long-term fixes
• Drive automation initiatives for infrastructure provisioning (Terraform) and operations using scripting tools
• Continuously collaborate with development teams to implement SRE best practices and improve service availability and performance
• Document processes and runbooks, and contribute to a culture of knowledge sharing and operational excellence
Key Requirements:
• Hands-on experience with AWS services: EKS, ECS, EC2, S3, IAM, etc.
• Strong expertise in Kubernetes and container orchestration
• Proficiency with observability tools like Prometheus, Grafana, and AppDynamics or equivalent
• Experience with CI/CD automation tools and workflows
• Solid understanding of infrastructure monitoring, alerting, and incident management
• Strong scripting skills in Bash or Python
• Excellent troubleshooting skills, a proactive mindset, and the ability to take full ownership
• Familiarity with SRE principles such as SLIs, SLOs, and error budgets is a plus
Job Types: Full-time, Permanent
Pay: ₹14,636.55 - ₹25,565.20 per month
Schedule: Day shift
Application Question(s): What DevOps tools have you used? What is a multi-stage Dockerfile? What monitoring tools have you used?
Experience: DevOps: 1 year (Required)
Work Location: In person
Speak with the employer: +91 9460179593
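The SRE posting above mentions SLIs, SLOs, and error budgets. A quick sketch of the arithmetic behind an error budget (the numbers are illustrative): a 99.9% availability SLO over a 30-day window leaves 43.2 minutes of permissible downtime, and each incident consumes a fraction of that budget.

```python
def error_budget_minutes(slo: float, window_minutes: float) -> float:
    """Minutes of allowed downtime in the window for a given availability SLO."""
    return (1.0 - slo) * window_minutes

def budget_consumed(downtime_minutes: float, slo: float, window_minutes: float) -> float:
    """Fraction of the error budget already spent on recorded downtime."""
    return downtime_minutes / error_budget_minutes(slo, window_minutes)

MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day window

budget = error_budget_minutes(0.999, MONTH)  # 43.2 minutes/month
spent = budget_consumed(20, 0.999, MONTH)    # 20 minutes down -> ~46% of budget
```

When `spent` approaches 1.0, SRE practice typically freezes risky releases until the window rolls over.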
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
Strong proficiency in Java (8 or higher) and the Spring Boot framework. A basic foundation in AWS services such as EC2, Lambda, API Gateway, S3, CloudFormation, DynamoDB, RDS. Experience developing microservices and RESTful APIs. Understanding of cloud architecture and deployment strategies. Familiarity with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or AWS CodePipeline. Experience with monitoring/logging tools like CloudWatch, ELK Stack, or Prometheus is desirable. Familiarity with security best practices for cloud-native apps (IAM roles, encryption, etc.). About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience Level: 4 to 6 years of relevant IT experience Job Overview: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems. Key Responsibilities: ● Design, develop, test, and maintain scalable ETL data pipelines using Python. ● Work extensively on Google Cloud Platform (GCP) services such as: ○ Dataflow for real-time and batch data processing ○ Cloud Functions for lightweight serverless compute ○ BigQuery for data warehousing and analytics ○ Cloud Composer for orchestration of data workflows (based on Apache Airflow) ○ Google Cloud Storage (GCS) for managing data at scale ○ IAM for access control and security ○ Cloud Run for containerized applications ● Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery. ● Implement and enforce data quality checks, validation rules, and monitoring. ● Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions. ● Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects. ● Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL. ● Document pipeline designs, data flow diagrams, and operational support procedures. Required Skills: ● 4–6 years of hands-on experience in Python for backend or data engineering projects. 
● Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.). ● Solid understanding of data pipeline architecture, data integration, and transformation techniques. ● Experience in working with version control systems like GitHub and knowledge of CI/CD practices. ● Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.). Good to Have (Optional Skills): ● Experience working with Snowflake cloud data platform. ● Hands-on knowledge of Databricks for big data processing and analytics. ● Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools. Additional Details: ● Excellent problem-solving and analytical skills. ● Strong communication skills and ability to collaborate in a team environment.
Posted 1 week ago
8.0 - 13.0 years
25 - 35 Lacs
Bengaluru
Work from Office
Endpoint & Network Security: Leverage CrowdStrike, XDR, and Zscaler for endpoint and network protection. Email & API Security: Manage and secure email platforms using Proofpoint and safeguard API security with WAF solutions.
Posted 1 week ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description As an Identity and Security Engineer , you will play a pivotal role in safeguarding digital identities across a complex, hybrid enterprise environment. You will be responsible for designing, implementing, and managing scalable IAM solutions that ensure secure and seamless access for users, applications, and services. This hands-on engineering role requires deep expertise in identity protocols, cloud IAM, and security automation. You will collaborate with cross-functional teams including DevOps, infrastructure, application development, and compliance to embed identity as a key component of the security architecture. Key Responsibilities Identity Architecture & Engineering Design and implement scalable IAM solutions, including SSO, MFA, and RBAC. Manage identity lifecycle processes: onboarding, offboarding, access reviews, and recertifications. Integrate IAM systems with enterprise applications, cloud platforms (Azure AD, AWS IAM), and third-party tools. Security Operations & Automation Develop automation scripts for identity provisioning and access governance. Deploy and manage Privileged Access Management (PAM) solutions to secure administrative access. Support Zero Trust Architecture by enforcing least privilege access across all environments. Monitoring, Detection & Incident Response Monitor identity-related events using SIEM and analytics tools. Investigate and respond to access violations and identity-based security incidents. Conduct root cause analysis and implement preventive controls. Compliance & Governance Ensure compliance with standards such as GDPR, PCI-DSS, ISO 27001. Maintain audit trails, access logs, and documentation to support internal/external audits. Contribute to policy development, risk assessments, and awareness programs. Collaboration & Continuous Improvement Work with DevOps and IT teams to embed IAM into CI/CD pipelines and cloud-native environments. Mentor junior engineers and promote IAM best practices across teams. 
Stay updated on identity trends, technologies, and evolving threat landscapes. Required Qualifications Minimum 5 years of experience in IAM or security engineering roles. Strong understanding of IAM protocols (SAML, OAuth2, OpenID Connect, LDAP, SCIM). Hands-on experience with Azure AD, Active Directory, AWS IAM/GCP IAM. Experience with PAM tools such as CyberArk, BeyondTrust, or HashiCorp Vault. Proficient in scripting languages (PowerShell, Python, or equivalent). Strong grasp of Zero Trust principles and identity governance frameworks. Preferred Qualifications Relevant certifications (Microsoft Identity and Access Administrator, CISSP, Azure Security Engineer, etc.). Experience in enterprise or retail environments at scale. Familiarity with Just-In-Time (JIT) access, identity analytics, and behavioral monitoring. Exposure to DevSecOps and CI/CD pipeline security integration. Skills: Identity Access Management, Information Security, Security Monitoring
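The IAM posting above calls for RBAC design and least-privilege enforcement. A toy sketch of the core idea (role and permission names are invented): a request is allowed only when one of the user's roles carries the required permission, and anything not explicitly granted is denied.

```python
# Minimal RBAC sketch: roles map to permission sets; deny by default.
ROLE_PERMS: dict[str, set[str]] = {
    "viewer": {"report:read"},
    "analyst": {"report:read", "report:export"},
    "admin": {"report:read", "report:export", "user:manage"},
}

def is_allowed(user_roles: set[str], permission: str) -> bool:
    """Grant access only if some assigned role carries the permission;
    unknown roles contribute nothing (least privilege by default)."""
    return any(permission in ROLE_PERMS.get(role, set()) for role in user_roles)

assert is_allowed({"viewer"}, "report:read")
assert not is_allowed({"viewer"}, "user:manage")
```

Production IAM systems layer conditions (time, device, resource scope) on top of this mapping, but the deny-by-default check stays the same.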
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
• Design, implement, and maintain CI/CD pipelines using Jenkins to support automated builds, testing, and deployments.
• Manage and optimize AWS infrastructure for scalability, reliability, and cost-effectiveness.
• Develop automation scripts and tools using shell scripting and other programming languages to streamline operational workflows.
• Collaborate with cross-functional teams (Development, QA, Operations) to ensure seamless software delivery and deployment.
• Monitor and troubleshoot infrastructure, build failures, and deployment issues to ensure high availability and performance.
• Implement and maintain robust configuration management practices and infrastructure-as-code principles.
• Document processes, systems, and configurations to ensure knowledge sharing and maintain operational consistency.
• Perform ongoing maintenance and upgrades (Production & Non-Production).
Requirements:
Experience: 5-8 years in DevOps or a similar role.
Cloud Expertise: Proficient in AWS services such as EC2, S3, RDS, Lambda, IAM, CloudFormation, or similar.
CI/CD Tools: Hands-on experience with Jenkins pipelines (declarative and scripted).
Scripting Skills: Proficiency in either shell scripting or PowerShell.
Programming Knowledge: Familiarity with at least one programming language (e.g., Python, Java, or Go). Scripting/programming is integral to this role and will be a key focus in the interview process.
Version Control: Experience with Git and Git-based workflows.
Monitoring Tools: Familiarity with tools like CloudWatch, Prometheus, or similar.
Problem-solving: Strong analytical and troubleshooting skills in a fast-paced environment.
CDK: Knowledge of AWS CDK.
Tools: Experience with Terraform and Kubernetes.
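The posting above stresses that scripting is integral to the role. One pattern that recurs constantly in deployment automation is retrying a transient failure with capped exponential backoff and jitter; a minimal sketch (the flaky function below is a stand-in for a real API call, not part of any posting):

```python
import random
import time

def retry(fn, attempts=5, base=0.5, cap=30.0):
    """Call fn, retrying on exception with capped exponential backoff
    plus jitter -- a common pattern in deployment-automation scripts."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # budget exhausted: surface the last error
            delay = min(cap, base * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay / 2))

# Demo: a stand-in operation that fails twice before succeeding.
calls = {"n": 0}
def flaky_deploy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "deployed"

result = retry(flaky_deploy, base=0.0)  # base=0.0 so the demo doesn't sleep
```

Jitter spreads out retries from many clients so they don't hammer a recovering service in lockstep.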
Posted 1 week ago
12.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Skills : AWS Lead / Solution Architect Experience : 12 - 22 Years Location : Kolkata
Job Summary:
• 15+ years of hands-on IT experience in design and development of complex systems
• Minimum of 5+ years in a solution or technical architect role using service and hosting solutions such as private/public cloud IaaS, PaaS and SaaS platforms
• At least 4+ years of hands-on experience in cloud-native architecture design and implementation of distributed, fault-tolerant enterprise applications for the cloud
• Experience in application migration to AWS cloud using Refactoring, Rearchitecting and Re-platforming approaches
• 3+ years of proven experience using AWS services in architecting PaaS solutions
• AWS Certified Architect
Technical Skills
• Deep understanding of Cloud Native and Microservices fundamentals
• Deep understanding of Gen AI usage and LLM models; hands-on experience creating agentic flows using AWS Bedrock; hands-on experience using Amazon Q for Dev/Transform
• Deep knowledge and understanding of AWS PaaS and IaaS features
• Hands-on experience in AWS services i.e. EC2, ECS, S3, Aurora DB, DynamoDB, Lambda, SQS, SNS, RDS, API Gateway, VPC, Route 53, Kinesis, CloudFront, CloudWatch, AWS SDK/CLI etc.
• Strong experience in designing and implementing core services like VPC, S3, EC2, RDS, IAM, Route 53, Auto Scaling, CloudWatch, AWS Config, CloudTrail, ELB, AWS Migration services, VPN/Direct Connect
• Hands-on experience in enabling Cloud PaaS app and data services like Lambda, RDS, SQS, MQ, Step Functions, AppFlow, SNS, EMR, Kinesis, Redshift, Elasticsearch and others
• Experience in automation and provisioning of cloud environments using APIs, CLI and scripts.
• Experience in deploying, managing and scaling applications using CloudFormation / AWS CLI
• Good understanding of AWS security best practices and the Well-Architected Framework
• Good knowledge of migrating on-premise applications to AWS IaaS
• Good knowledge of AWS IaaS (AMI, pricing model, VPC, subnets etc.)
• Good to have: experience in cloud data processing and migration, and advanced analytics with AWS Redshift, Glue, AWS EMR, AWS Kinesis, Step Functions
• Creating, deploying, configuring and scaling applications on AWS PaaS
• Experience in Java and frameworks such as Spring, Spring Boot, Spring MVC, Spring Security, and multi-threaded programming
• Experience in working with Hibernate or other ORM technologies along with JPA
• Experience in working on modern web technologies such as Angular, Bootstrap, HTML5, CSS3, React
• Experience in modernization of legacy applications to modern Java applications
• Experience in DevOps tools: Jenkins/Bamboo, Git, Maven/Gradle, Jira, SonarQube, JUnit, Selenium, automated deployments and containerization
• Knowledge of relational databases and NoSQL databases i.e. MongoDB, Cassandra etc.
• Hands-on experience with the Linux operating system
• Experience in full life-cycle agile software development
• Strong analytical & troubleshooting skills
• Experienced in Python, Node.js and Express.js (Optional)
Main Duties:
• The AWS architect takes the company’s business strategy and outlines the technology systems architecture needed to support that strategy.
• Responsible for analysis, evaluation and development of enterprise long-term cloud strategic and operating plans to ensure that the EA objectives are consistent with the enterprise’s long-term business objectives.
• Responsible for the development of architecture blueprints for related systems.
• Responsible for recommendations on cloud architecture strategies, processes and methodologies.
• Involved in design and implementation of best-fit solutions with respect to the Azure and multi-cloud ecosystem.
• Recommends and participates in activities related to the design, development and maintenance of the Enterprise Architecture (EA).
• Conducts and/or actively participates in meetings related to the designated project(s).
• Participates in client pursuits and is responsible for the technical solution.
• Shares best practices and lessons learned, and constantly updates the technical system architecture requirements based on changing technologies and knowledge related to recent, current and upcoming vendor products and solutions.
• Collaborates with all relevant parties to review the objectives and constraints of each solution and determine conformance with the EA. Recommends the most suitable technical architecture and defines the solution at a high level.
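The architect role above pairs Lambda with SQS among its core services. A minimal sketch of an SQS-triggered Lambda handler in Python, invoked locally with a hand-built sample event (the payload fields are invented; the `Records`/`body` shape follows the standard SQS event format):

```python
import json

def handler(event, context=None):
    """Minimal AWS Lambda handler for an SQS-triggered function:
    parse each message body and report how many were processed."""
    processed = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        # Real code would act on the message here (write to DynamoDB,
        # publish to SNS, etc.); this sketch just collects the payloads.
        processed.append(body)
    return {"statusCode": 200, "processed": len(processed)}

# Invoke locally with a hand-built sample SQS event.
sample_event = {"Records": [{"body": json.dumps({"order_id": 42})}]}
result = handler(sample_event)
```

Testing the handler locally against a synthetic event like this, before wiring up the event source mapping, is a common development loop for serverless functions.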
Posted 1 week ago