3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
The Max Maintenance team is seeking an experienced Principal Software Architect to lead the modernization and cloud transformation of a legacy .NET web application with a SQL Server backend. The role requires a deep understanding of AWS cloud services, including API Gateway, AWS Lambda, Step Functions, DynamoDB, and Neptune, in order to re-architect the system into a scalable, serverless, event-driven platform. The ideal candidate combines a strong architectural vision with hands-on technical proficiency and a dedication to mentoring and guiding development teams through digital transformation initiatives. Are you someone who thrives in a fast-paced and dynamic team environment? If so, we invite you to join our diverse and motivated team.

Key Responsibilities:
- Lead the comprehensive cloud transformation strategy for a legacy .NET/SQL Server web application.
- Develop and deploy scalable, secure, serverless AWS-native architectures using services such as API Gateway, AWS Lambda, Step Functions, DynamoDB, and Neptune (see the illustrative sketch at the end of this posting).
- Establish and execute data migration plans, transitioning relational data models into NoSQL (DynamoDB) and graph-based (Neptune) storage paradigms.
- Set standards for infrastructure-as-code, CI/CD pipelines, and monitoring using AWS CloudFormation, CDK, or Terraform.
- Offer hands-on technical guidance to development teams, ensuring high code quality and compliance with cloud-native principles.
- Assist teams in adopting cloud technologies, service decomposition, and event-driven design patterns.
- Mentor engineers in AWS technologies, microservices architecture, and best practices in DevOps and modern software engineering.
- Develop and review code for critical services, APIs, and data access layers using appropriate languages (e.g., Python, Node.js).
- Create and implement APIs for both internal and external consumers, ensuring secure and dependable integrations.
- Conduct architecture reviews and threat modeling, and enforce rigorous testing practices, including automated unit, integration, and load testing.
- Collaborate closely with stakeholders, project managers, and cross-functional teams to define technical requirements and delivery milestones.
- Translate business objectives into technical roadmaps and prioritize technical debt reduction and performance enhancements.
- Engage stakeholders to manage expectations and communicate clearly on technical progress and risks.
- Stay informed about AWS ecosystem updates, architectural trends, and emerging technologies.
- Assess and prototype new tools, services, or architectural approaches that can accelerate delivery and reduce operational complexity.
- Advocate for a DevOps culture emphasizing continuous delivery, observability, and security-first development.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 8 years of software development experience, with at least 3 years focused on architecting cloud-native solutions on AWS.
- Proficiency in AWS services such as API Gateway, Lambda, Step Functions, DynamoDB, Neptune, IAM, and CloudWatch.
- Experience in legacy application modernization and cloud migration.
- Strong familiarity with the .NET stack and the ability to map legacy components to cloud-native equivalents.
- Extensive knowledge of distributed systems, serverless design, data modeling (relational, NoSQL, and graph), and security best practices.
- Demonstrated leadership and mentoring skills within agile software teams.
- Exceptional problem-solving, analytical, and decision-making capabilities.

The oil and gas industry's top professionals leverage over 150 years of combined experience every day to assist customers in achieving enduring success.

We Power the Industry that Powers the World: Our family of companies has delivered technical expertise, cutting-edge equipment, and operational assistance across every region and aspect of drilling and production, ensuring current and future success.

Global Family: We operate as a unified global family, comprising thousands of individuals working together to make a lasting impact on ourselves, our customers, and the communities we serve.

Purposeful Innovation: Through intentional business innovation, product development, and service delivery, we are committed to enhancing the industry that powers the world.

Service Above All: Our commitment to anticipating and meeting customer needs drives us to deliver superior products and services promptly and within budget.
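For illustration only (not part of the original posting): a minimal Python sketch of the kind of serverless building block this role describes - an AWS Lambda handler sitting behind API Gateway that persists a record to DynamoDB via boto3. The table name, key schema, and payload fields are hypothetical.

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("work-orders")  # hypothetical table name


def handler(event, context):
    """API Gateway (proxy integration) -> Lambda -> DynamoDB."""
    body = json.loads(event.get("body") or "{}")
    item = {
        "pk": body["orderId"],                 # assumed partition key
        "status": body.get("status", "NEW"),
    }
    table.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"saved": item["pk"]}),
    }
```

In a full event-driven design of the kind described above, Step Functions would typically orchestrate several such functions, with DynamoDB holding document-style records and Neptune holding graph-shaped relationships.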
Posted 3 days ago
0.0 - 1.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About NCR Voyix Off Campus Drive 2025

NCR Voyix Corporation (NYSE: VYX) is a leading global provider of digital commerce solutions for the retail, restaurant and banking industries. NCR Voyix is headquartered in Atlanta, Georgia, with approximately 16,000 employees in 35 countries across the globe. For nearly 140 years, we have been the global leader in consumer transaction technologies, turning everyday consumer interactions into meaningful moments.

NCR Voyix Off Campus Drive 2025 Details
- Company Name: NCR Voyix Corporation
- Job Role: App Dev Engineer I
- Job Type: Full Time
- Job Location: Gurugram
- Education: BE/B.Tech
- Career Level: 0 - 1 Years
- Salary: Not Mentioned
- Company Website: www.ncrvoyix.com

Job Description for NCR Voyix Off Campus Drive 2025
- Strong knowledge and experience of Java/J2EE, Spring, Quarkus.
- Hands-on experience in eCommerce solutions, including at least one eCommerce platform such as Salesforce, ATG, CommerceTools, or Magento.
- Hands-on experience with RDBMS databases (Oracle Database 19c and above, Microsoft SQL Server) along with strong PL/SQL knowledge.
- Hands-on experience and understanding of application servers (WebLogic, WebSphere, Apache Tomcat).
- Hands-on experience with HTML, JS/jQuery, CSS, React, NodeJS and other web technologies.
- Hands-on experience and strong understanding of web services (REST and SOAP) integrations.
- Hands-on experience and knowledge of GitHub (i.e., version control systems).
- Good knowledge and experience in managing applications on Linux servers.
- Hands-on experience with Google Analytics and Google Captcha.
- Hands-on experience with any APM (application performance monitoring) tool.
- Knowledge of Cloudflare or any cloud solution for DDoS and WAF will be a plus.
- Knowledge of reverse proxies/content rewriters such as BIG-IP F5, Nginx, or TrafficIO will be a plus.
- Strong knowledge and experience working in an Agile development environment.
- Ability to handle integration challenges with third-party programs/applications.
- Hands-on experience managing cloud infrastructure (Azure), including Infrastructure as Code (IaC) tools such as Terraform and CloudFormation.
- Knowledge of cloud infrastructure best practices.
- Strong knowledge of architecture principles, networking, and security.

NCR Voyix Off Campus Drive 2025 Application Process
We wish you the best of luck in your NCR Voyix Off Campus Drive 2025. May your talents shine, and may you find the perfect opportunity that not only meets your professional goals but also brings joy to your everyday work.
Posted 3 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview
TekWissen is a global workforce management provider with operations throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place - one that benefits lives, communities and the planet.

Job Title: Specialty Development Practitioner
Location: Chennai
Work Type: Hybrid

Position Description
At the client's Credit Company, we are modernizing our enterprise data warehouse in Google Cloud to enhance data, analytics, and AI/ML capabilities, improve customer experience, ensure regulatory compliance, and boost operational efficiencies. As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate it into subject areas, and build data marts and products for analytics solutions (an illustrative sketch follows this posting). You will also conduct deep-dive analysis of current-state Receivables and Originations data in our data warehouse, performing impact analysis related to the client's Credit North America modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for the client's Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.

Skills Required
BigQuery, Dataflow, Dataform, Data Fusion, Dataproc, Cloud Composer, Airflow, Cloud SQL, Compute Engine, Google Cloud Platform

Experience Required
- GCP Data Engineer Certified.
- Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions.
- 5+ years of complex SQL development experience.
- 2+ years of experience with programming languages such as Python, Java, or Apache Beam.
- Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications through to production-scale solutions.
Skills Preferred
BigQuery, Dataflow, Dataproc, Data Fusion, Terraform, Tekton, Cloud SQL, Airflow, Postgres, PySpark, Python, APIs, Cloud Build, App Engine, Apache Kafka, Pub/Sub, AI/ML, Kubernetes

Experience Preferred
- In-depth understanding of GCP's underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch and real time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, Dataproc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, and App Engine, alongside storage services including Cloud Storage.
- DevOps tools such as Tekton, GitHub, Terraform, and Docker.
- Expert in designing, optimizing, and troubleshooting complex data pipelines.
- Experience developing with microservice architecture on a container orchestration framework.
- Experience in designing pipelines and architectures for data processing.
- Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques.
- Self-directed, works independently with minimal supervision, and adapts to ambiguous environments.
- Evidence of a proactive problem-solving mindset and willingness to take the initiative.
- Strong prioritization, collaboration and coordination skills, and the ability to simplify and communicate complex ideas to cross-functional teams and all levels of management.
- Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity.
- Data engineering or development experience gained in a regulated financial environment.
- Experience in coaching and mentoring data engineers.
- Project management tools such as Atlassian JIRA.
- Experience working on an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.
- Experience with data security, governance, and compliance best practices in the cloud.
- Experience with AI solutions or platforms that support AI solutions.
- Experience using data science concepts on production datasets to generate insights.

Experience Range
5+ years

Education Required
Bachelor's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
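For illustration only: a minimal Python sketch of the land-and-integrate pattern described above, using the google-cloud-bigquery client to load files from Cloud Storage into a staging table and then build a simple mart. Project, dataset, bucket, and column names are placeholders, not the client's environment.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

# Land raw CSV files from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-landing-bucket/receivables/*.csv",      # hypothetical bucket
    "example_project.staging.receivables_raw",            # hypothetical table
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # wait for the load to complete

# Integrate into a subject-area table (data mart) with a SQL transform.
client.query(
    """
    CREATE OR REPLACE TABLE example_project.marts.receivables_daily AS
    SELECT account_id, DATE(event_ts) AS event_date, SUM(amount) AS total_amount
    FROM example_project.staging.receivables_raw
    GROUP BY account_id, event_date
    """
).result()
```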
Posted 3 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview
TekWissen is a global workforce management provider with operations throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place - one that benefits lives, communities and the planet.

Job Title: Software Engineer Consultant/Expert
Location: Chennai
Work Type: Hybrid

Position Description
The Software Engineer will work on a Balanced Product Team and collaborate with the Product Manager, Product Designer, and other Software Engineers to deliver analytic solutions. The Software Engineer will be responsible for the development and ongoing support/maintenance of the analytic solutions.
- Product and requirements management: Participate in and/or lead the development of requirements, features, user stories, use cases, and test cases. Participate in stand-up and operations meetings. Author process and design documents.
- Design/develop/test/deploy: Work with the business customer, Product Owner, architects, Product Designer, Software Engineers, and Security Controls Champion on solution design, development, and deployment.
- Operations: Generate metrics, perform user access authorization, perform password maintenance, and build deployment pipelines.
- Incident, problem, and change/service requests: Participate in and/or lead incident, problem, change, and service request-related activities, including root cause analysis (RCA) and proactive problem management/defect prevention activities.

Skills Required
Full Stack Java Developer, Angular, GCP, React, Spring Boot, AWS

Experience Required
- 8+ years of experience in software engineering.
- Bachelor's degree in computer science, computer engineering, or a combination of education and equivalent experience.
- 1+ year of experience developing for and deploying to GCP/AWS/Azure cloud platforms.
- Experience in development with at least one item from each of the following categories:
  - Languages: Java, Python
  - Frontend frameworks: Angular / React / Dash
  - Backend frameworks: Spring Boot / Node / other
  - Unit test frameworks: JUnit, Karma
- Proven experience understanding, practicing, and advocating for software engineering disciplines from eXtreme Programming (XP), Clean Code, Software Craftsmanship, and Lean, including:
  - Paired/extreme programming
  - Test-first/test-driven development (TDD)
  - Evolutionary design
  - Minimum Viable Product
- FOSSA, SonarQube, 42Crunch, Checkmarx, etc.
- Willingness to collaborate daily with team members.
- A strong curiosity about how best to use technology to amaze and delight our customers.

Experience Preferred
- Highly effective in working with other technical experts, Product Managers, UI/UX Designers, and business stakeholders.
- Delivered products that include web front-end development: JavaScript, CSS, frameworks like Angular, Python, etc.
- Comfortable with continuous integration/continuous delivery tools and pipelines, e.g. Tekton, Terraform, Jenkins, Cloud Build, etc.
- Experience with machine learning, mathematical modeling, LLMs, and data analysis is a plus.
- Experience with CA Agile Central (Rally), JIRA, backlogs, iterations, user stories, or similar Agile tools.
- Experience in the development of microservices.
- Understanding of fundamental data modeling.
- Strong analytical and problem-solving skills.

Experience Range
8+ Years

Education Required
Bachelor's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As an AI Ops Expert, you will take full ownership of deliverables with defined quality standards, timelines, and budget constraints. Your primary role will involve designing, implementing, and managing AIOps solutions to automate and optimize AI/ML workflows. Collaborating with data scientists, engineers, and stakeholders is essential to ensure the seamless integration of AI/ML models into production environments.

Your duties will also include monitoring and maintaining the health and performance of AI/ML systems, developing and maintaining CI/CD pipelines specifically tailored for AI/ML models, and implementing best practices for model versioning, testing, and deployment. You will also troubleshoot and resolve issues related to AI/ML infrastructure and workflows.

To excel in this role, you are expected to stay abreast of the latest AIOps, MLOps, and Kubernetes tools and technologies. Your strong skills should include proficiency in Python with experience in FastAPI, hands-on expertise in Docker and Kubernetes (or AKS), familiarity with Microsoft Azure and its AI/ML services such as Azure ML Flow, and the ability to use DevContainers for development purposes.

Furthermore, you should possess knowledge of CI/CD tools such as Jenkins, Argo CD, Helm, GitHub Actions, or Azure DevOps; experience with containerization and orchestration tools like Docker and Kubernetes; proficiency in Infrastructure as Code (Terraform or equivalent); familiarity with machine learning frameworks like TensorFlow, PyTorch, or scikit-learn; and exposure to data engineering tools such as Apache Kafka, Apache Spark, or similar technologies.
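As a hedged illustration of the model-serving side of such AIOps work (not the employer's actual stack): a minimal FastAPI service that wraps a scikit-learn model so it can be containerized and deployed to Kubernetes or AKS. The model artifact path, feature shape, and endpoint names are assumptions.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact baked into the container image


class Features(BaseModel):
    values: list[float]  # assumed flat numeric feature vector


@app.get("/healthz")
def healthz():
    # Liveness/readiness probe target for Kubernetes.
    return {"status": "ok"}


@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```

A CI/CD pipeline of the kind mentioned above would typically build this into an image, push it to a registry, and roll it out with Helm or GitHub Actions/Azure DevOps.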
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Developer at Virtusa, you will be responsible for designing, implementing, and maintaining cloud infrastructure using the AWS Cloud Development Kit (CDK). Your role will involve developing and evolving Infrastructure as Code (IaC) to ensure efficient provisioning and management of AWS resources. You will also develop and automate Continuous Integration/Continuous Deployment (CI/CD) pipelines for infrastructure provisioning and application deployment.

Your responsibilities will include configuring and managing various AWS services such as EC2, VPC, Security Groups, NACLs, S3, CloudFormation, CloudWatch, AWS Cognito, IAM, Transit Gateway, ELB, CloudFront, Route 53, and more. Collaboration with development and operations teams will be essential to bridge the gap between infrastructure and application development. Monitoring and troubleshooting infrastructure performance issues to ensure high availability and reliability will also be part of your role. You will implement proactive measures to optimize resource utilization, identify potential bottlenecks, and adhere to security best practices, including data encryption and compliance with industry standards and regulations.

The ideal candidate should have at least 5 years of hands-on experience in DevOps and infrastructure engineering, along with a solid understanding of AWS services and technologies. Proficiency in CI/CD tools, DevOps implementation, and HA/DR setup is required. Skills in AWS networking services, storage services, certificate management, secrets management, and database setup (RDS) are essential, as is expertise in Terraform/CloudFormation/AWS CDK. Strong scripting and programming skills in Python and Bash are also necessary for this role.

Nice-to-have skills include expertise in AWS CDK and CDK Pipelines for IaC, and familiarity with logging and monitoring services like AWS CloudTrail, CloudWatch, GuardDuty, and other AWS security services. Excellent communication and collaboration skills are valued to work effectively in a team-oriented environment.

At Virtusa, we embody values of teamwork, quality of life, and professional and personal development. Join our global team of 27,000 professionals who care about your growth and provide exciting projects, opportunities, and work with state-of-the-art technologies throughout your career. We value collaboration and seek to provide a dynamic environment to nurture new ideas and foster excellence.
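For illustration, a minimal AWS CDK (v2, Python) sketch of the kind of infrastructure-as-code this role involves: a stack containing a VPC, a security group, and an encrypted, versioned S3 bucket. Construct IDs and settings are placeholders, not Virtusa's actual configuration.

```python
import aws_cdk as cdk
from aws_cdk import aws_ec2 as ec2, aws_s3 as s3


class CoreInfraStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Two-AZ VPC with CDK's default public/private subnet layout.
        vpc = ec2.Vpc(self, "AppVpc", max_azs=2)

        # Security group placeholder; ingress rules would be added per application.
        ec2.SecurityGroup(
            self, "AppSg",
            vpc=vpc,
            description="Application security group",
            allow_all_outbound=True,
        )

        # Private, encrypted, versioned bucket for build artifacts.
        s3.Bucket(
            self, "ArtifactBucket",
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            versioned=True,
        )


app = cdk.App()
CoreInfraStack(app, "core-infra")
app.synth()
```

Running `cdk synth` and `cdk deploy` against such a stack is the usual workflow, and the same constructs can be wired into CDK Pipelines for CI/CD.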
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
We have an exciting and rewarding opportunity for you to impact your career and provide an adventure where you can push the limits of what's possible. Our team focuses on applying GenAI, ML, and statistical models to solve business problems in the Global Wealth Management space.

As a Lead Software Engineer - Infrastructure Cloud at JPMorgan Chase within the Asset and Wealth Management Technology Team, you will collaborate with development teams to enhance the developer experience, delivering end-to-end cutting-edge solutions in the form of cloud-native microservices architecture applications. You will be involved in the design and architecture of solutions, focusing on all SDLC lifecycle stages. Our team works in tribes and squads, allowing you to move between projects based on your strengths and interests, making a significant impact on our clients and business partners worldwide.

Job responsibilities
- Collaborate with development teams to enhance the developer experience, providing tools and infrastructure that support agile methodologies and continuous integration/continuous deployment.
- Execute software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches.
- Create secure and high-quality production code to deploy infrastructure.
- Develop and maintain automated pipelines for model/product deployment, ensuring scalability, reliability, and efficiency.
- Produce architecture and design artifacts for complex applications, ensuring design constraints are met by software code development.
- Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets for continuous improvement of software applications and systems.
- Identify hidden problems and patterns in data, using insights to drive improvements to coding hygiene and system architecture.
- Contribute to software engineering communities of practice and events that explore new and emerging technologies.
- Add to a team culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills
- Formal training or certification in software engineering concepts and 5+ years of applied experience.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Proficient with public cloud services in production (AWS or other) and proficient in the Python scripting language.
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages (Python) and a database querying language.
- Experience with Infrastructure as Code (Terraform or other).
- Experience in applied AI/ML engineering, with a track record of deploying business-critical GenAI and machine learning models in production.
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, mobile, etc.).
- Ability to tackle design and functionality problems independently.
- Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security.
- Experience in architecting, supporting, and implementing advanced, strategic CI/CD migrations.
- Strong collaboration skills to work effectively with cross-functional teams, communicate complex concepts, and contribute to interdisciplinary projects.

Preferred qualifications, capabilities, and skills
- Experience with any of GitHub, GitHub Actions, Artifactory, Terraform Cloud, Slack, Grafana, SonarQube is considered a plus.
- Experience in designing and implementing AI/ML/LLM/GenAI pipelines.
- Stay informed about the latest trends and advancements in AI/ML/LLM/GenAI research, implement cutting-edge techniques, and leverage external APIs for enhanced functionality.
Posted 3 days ago
10.0 - 14.0 years
0 Lacs
hyderabad, telangana
On-site
As the Lead Cloud Network Engineer at Assurant-GCC in India, you will report to the VP of Cloud Planning, Infrastructure, and Cloud Services (ICS). Your primary responsibility will be to lead the design and implementation of public cloud networking connectivity and automation. Your extensive knowledge of networking technologies, including SDN in public cloud, security, and automation, will be crucial for the successful delivery of the Enterprise Cloud as a service.

You will be instrumental in implementing the design and build of public cloud connectivity from Assurant DCs, the Internet, and customer locations for Enterprise Cloud platform delivery. Additionally, you will oversee the design and delivery of all network services within the public cloud and work on automation of these services in collaboration with automation and principal network engineers.

Your duties and responsibilities will include designing, engineering, and building patterns and catalogs of enterprise and cloud networking. This encompasses LAN, WAN, DNS, EVPN, VXLAN, and BGP-based data center network designs with public cloud connectivity, maintaining network engineering designs, and providing the network as a product and service. You will collaborate with the DevSecOps team to define requirements for catalog automation for network infrastructure and policy as code, ensuring alignment with enterprise security requirements. Furthermore, you will engage with stakeholders and actively perform research to stay updated on new public cloud network capabilities and their alignment with Assurant Enterprise Cloud strategy and opportunities.

Your qualifications for this position include a minimum of 10 years of overall work experience, a Bachelor's degree in engineering or computer science, and at least 7 years of experience in network infrastructure engineering in senior roles. You should have proven experience in the design and build of cloud-related networking solutions, enterprise experience in the design and delivery of network services as Infrastructure as Code, and experience in designing and building data center network solutions for large-scale modern data centers and public and private clouds. Additionally, experience with Ansible, Python, and Terraform, and experience working with cross-functional enterprise Centers of Excellence, will be beneficial.

Assurant's culture values talent, innovation, service, and taking calculated risks. As an Assurant employee, you will be part of a global business services company that supports and connects major consumer purchases, offering innovative solutions and delivering an enhanced customer experience. If you are passionate about service, innovation, and collaboration, Assurant may be the place for you to thrive and grow in a connected world.

For U.S. benefit information, visit myassurantbenefits.com. For benefit information outside the U.S., please speak with your recruiter.
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
kolkata, west bengal
On-site
You are a passionate and customer-focused AWS Solutions Architect seeking to join Workmates, the fastest-growing partner to the world's major cloud provider, AWS. In this role, you will play a crucial part in driving innovation, creating differentiated solutions, and shaping new customer experiences. Collaborating with industry specialists and technology experts, you will help customers maximize the benefits of AWS in their cloud journey. By choosing Workmates and the AWS Practice, you will elevate your AWS expertise to new heights in an innovative and collaborative setting. Embrace the opportunity to lead the way in native cloud transformation with the leading partner in AWS growth worldwide.

At Workmates, we value our people as our greatest assets and are committed to fostering a culture of excellence in cloud-native operations. Join us in our mission to drive innovation across Cloud Management, Media, DevOps, Automation, IoT, Security, and more. Be part of a team where independence and ownership are encouraged, allowing you to thrive authentically.

Role Description:
- Build and manage cloud infrastructure environments.
- Ensure availability, performance, security, and scalability of production systems.
- Collaborate with application teams to implement DevOps practices throughout the development lifecycle.
- Develop solution prototypes and conduct proofs of concept for new tools.
- Design automated, repeatable, and scalable processes to enhance efficiency and software quality, including managing Infrastructure as Code and developing internal tooling to simplify workflows.
- Automate and streamline operations and processes (an illustrative automation sketch follows this posting).
- Troubleshoot and diagnose issues/outages, providing operational support.
- Engage in incident handling, promoting a culture of post-mortem analysis and knowledge sharing.

Requirements:
- Minimum of 5 years of hands-on experience in building and supporting large-scale environments.
- Strong background in architecting and implementing AWS cloud solutions.
- Proficiency in AWS CloudFormation and Terraform.
- Experience with Docker containers, including container environment build and deployment.
- Proficient in Kubernetes and EKS.
- Sysadmin and infrastructure expertise (Linux internals, filesystems, networking).
- Skilled in scripting, particularly Bash scripting.
- Experience with code check-in, peer review, and collaboration within distributed teams.
- Hands-on experience in CI/CD pipeline setup and release.
- Strong familiarity with CI/CD tools such as Jenkins, GitLab, or TravisCI.
- Proficient in AWS developer tools such as AWS CodePipeline, CodeBuild, CodeDeploy, AWS Lambda, AWS Step Functions, etc.
- Experience with log management solutions (ELK/EFK or similar).
- Proficiency in configuration management tools like Ansible or similar.
- Expertise in modern monitoring and alerting tools (CloudWatch, Prometheus, Grafana, Opsgenie, etc.).
- Passion for automating tasks and troubleshooting production issues.
- Experience in automation testing, script generation, and integration with CI/CD.
- Skilled in AWS security (IAM, Security Groups, KMS, etc.).
- Must have CKA/CKAD/CKS certifications and knowledge of Python/Go/Bash.

Good to have:
- AWS Professional Certifications.
- Experience with service mesh and distributed tracing.
- Knowledge of Scrum/Agile methodology.

Choose Workmates to advance your career and be part of a team dedicated to delivering innovative solutions in a dynamic and supportive environment.
Join us in shaping the future of cloud technology and making a meaningful impact on the industry.
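As a small, hedged example of the kind of repeatable operational automation listed above: creating a CloudWatch CPU alarm with boto3. The instance ID, SNS topic ARN, and thresholds are hypothetical placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU stays above 80% for three consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="web-01-high-cpu",                                       # hypothetical name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],    # hypothetical SNS topic
)
```

In practice such resources would usually be codified in CloudFormation or Terraform rather than created ad hoc; the boto3 call simply shows the underlying API.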
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As part of Continental's digital capabilities growth, the global Cloud Services team is leading the transformation by establishing scalable, secure, and efficient cloud platforms for innovative solutions worldwide. We are currently looking for an experienced Cloud Platform Architect to take charge of designing, implementing, and governing our enterprise Azure landing zone. In this role, you will have the opportunity to set cloud architecture standards, develop secure and compliant infrastructure, and lay the groundwork for our development teams to deliver solutions efficiently and reliably.

Your responsibilities will include:
- Designing and leading the architectural vision for Continental's Azure cloud landing zone, aligning with Microsoft's recommended practices and the Azure roadmap.
- Enhancing strategic cloud platform features in security, compliance, connectivity, and governance while overseeing the entire Azure landing zone infrastructure.
- Implementing cloud security controls across the enterprise, including automated compliance monitoring and vulnerability remediation processes.
- Developing Infrastructure as Code (IaC) solutions for automated provisioning, configuration, and management of the Azure cloud landing zone.
- Collaborating with internal stakeholders to provide secure access to Azure IaaS and PaaS offerings.
- Troubleshooting platform issues and enhancing service documentation.

To be considered for this role, you should have:
- An academic degree in computer science or a related technical field.
- A minimum of 3 years of experience in architecting cloud platforms (especially Azure) in enterprise settings.
- Proficiency in DevOps methodologies, CI/CD implementation, version control systems, and automation tools in cloud environments.
- Knowledge of a high-level scripting language and Infrastructure as Code tooling.
- A basic understanding of IT service management.
- Strong technical leadership, excellent communication skills, and the ability to work in a dynamic environment.

In return, Continental offers:
- Opportunities for career growth and development within the organization.
- A hybrid work arrangement for better work-life balance.
- Global exposure working with a diverse team.
- Attractive employee benefit policies such as parental leave, wellness programs, and more.

If you are ready to drive innovation with Continental and share our core values of Trust For One Another, Passion to Win, and Freedom to Act, we welcome your application. Join us in shaping the future of sustainable and connected mobility. Apply now to be part of Continental's global IT team and contribute to our mission of providing safe, efficient, and affordable solutions for vehicles, machines, and transportation.
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
As a Cloud DevOps Engineer at Amdocs, your role will involve the design, development, modification, debugging, and maintenance of software systems. You will be responsible for tasks such as monitoring, triaging, root cause analysis, and reporting of production incidents. Additionally, you will investigate issues reported by clients, manage Reportdb servers, and handle access management for new users on Reportdb. Collaborating with the stability team to enhance Watch tower alerts, working with cronjobs and scripts for dumping and restoring from ProdDb, and performing non-prod deployments in Azure DevOps will also be part of your responsibilities. Creating Kibana dashboards will be another key aspect of your job.

To excel in this role, you must possess technical skills such as experience in AWS DevOps, EKS, and EMR, strong knowledge of Docker and Docker Hub, proficiency in Terraform and Ansible, and good exposure to Git and Bitbucket. Knowledge and experience in Kubernetes and Docker, plus cloud experience working with VMs and Azure storage, will be beneficial. Sound data engineering experience is also preferred.

In addition to technical skills, you are expected to have strong problem-solving abilities, effective communication with clients and operational managers, and the capacity to build and maintain good relationships with colleagues. Being adaptable and able to prioritize tasks, work under pressure, and meet deadlines is essential. Anticipating problems, demonstrating an innovative approach, and possessing good presentation skills are qualities that will contribute to your success in this role. A willingness to work in shifts and extended hours is required.

In this position, you will have the opportunity to design and develop new software applications and work in a dynamic environment that offers personal growth opportunities. If you are looking for a challenging role where you can contribute to innovative projects and be part of a growing organization, this job is perfect for you.
Posted 3 days ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
As a Full Stack Data Engineer Lead Analyst at Evernorth, you will be a key player in the Data & Analytics Engineering organization of Cigna, a leading health services company. Your role will involve delivering business needs by understanding requirements and deploying software into production. To excel in this position, you should be well-versed in critical technologies, eager to learn, and committed to adding value to the business. Ownership, a thirst for knowledge, and an open mindset are essential attributes for a successful Full Stack Engineer.

In addition to delivery responsibilities, you will be expected to embrace an automation-first and continuous improvement mindset. You will drive the adoption of CI/CD tools and support the enhancement of toolsets and processes. Your ability to articulate clear business objectives aligned with technical specifications and to work in an iterative, agile manner will be crucial. Taking ownership and being accountable, writing referenceable and modular code, and ensuring data quality are key behaviors expected from you.

Key Characteristics:
- Independently design and architect solutions.
- Demonstrate ownership and accountability.
- Write referenceable and modular code.
- Possess fluency in specific areas and proficiency in multiple areas.
- Exhibit a passion for continuous learning.
- Maintain a quality mindset to ensure data quality and business impact assessment.

Required Skills:
- Experience in developing data integration and ingestion strategies, including the Snowflake cloud data warehouse, AWS S3 buckets, and loading nested JSON-formatted data (an illustrative sketch follows this posting).
- Strong understanding of the Snowflake cloud database architecture.
- Proficiency in big data technologies like Databricks, Hadoop, HiveQL, and Spark (Scala/Python), and cloud technologies such as AWS (S3, Glue, Terraform, Lambda, Aurora, Redshift, EMR).
- Experience working on analytical models and enabling their deployment and production via data and analytical pipelines.
- Expertise in query tuning and performance improvements.
- Previous exposure to an onsite/offshore setup or model.

Required Experience & Education:
- 8+ years of professional industry experience.
- Bachelor's degree (or equivalent).
- 5+ years of Python scripting experience.
- 5+ years of data management and SQL expertise in Teradata & Snowflake.
- 3+ years of Agile team experience, preferably with Scrum.

Desired Experience:
- Familiarity with version management tools, with Git preferred.
- Exposure to BDD and TDD development methodologies.
- Experience in an agile CI/CD environment; Jenkins experience is preferred.
- Knowledge of health care information domains is advantageous.

Location & Hours of Work:
(Specify whether the position is remote, hybrid, in-office, and where the role is located as well as the required hours of work)

Evernorth is committed to being an Equal Opportunity Employer, actively promoting and supporting diversity, equity, and inclusion efforts throughout the organization. Staff are encouraged to participate in these initiatives to enhance internal practices and external collaborations with diverse client populations.
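For illustration only: a minimal sketch of landing nested JSON from S3 into Snowflake through an external stage and flattening it with VARIANT path expressions, roughly the ingestion pattern named in the required skills. Account, stage, table, and column names are assumptions; credentials would normally come from key-pair authentication or a secrets manager rather than a hard-coded password.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",          # hypothetical account locator
    user="ETL_USER",
    password="...",             # placeholder; use key-pair auth / secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()

# RAW.CLAIMS_JSON is assumed to have a single VARIANT column named PAYLOAD,
# and @claims_stage is assumed to be an external stage pointing at the S3 bucket.
cur.execute("""
    COPY INTO raw.claims_json
    FROM @claims_stage/claims/
    FILE_FORMAT = (TYPE = 'JSON')
""")

# Flatten the nested JSON into a relational view for downstream marts.
cur.execute("""
    CREATE OR REPLACE VIEW raw.claims_flat AS
    SELECT payload:claimId::string        AS claim_id,
           payload:member.id::string      AS member_id,
           payload:amount::number(12, 2)  AS amount
    FROM raw.claims_json
""")

cur.close()
conn.close()
```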
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III - SRE, AWS, Python, Golang at JPMorgan Chase within the Commercial & Investment Bank - FICC (Fixed Income, Currencies, and Commodities) eTrading and Analytics Platforms Team, you will play a crucial role in an agile team dedicated to creating and deploying innovative technology products that emphasize security, stability, and scalability. Your responsibilities will include developing vital technology solutions across various technical areas to support the firm's strategic goals.

You will be involved in designing and implementing cloud-based, multi-region engineering solutions to improve global eTrading and data analytics platforms. You will:
- Contribute to the design and implementation of AWS large-scale distributed systems capturing billions of real-time messages daily and computing eTrading analytics and signals.
- Automate provisioning of AWS multi-region network, compute, and storage services that non-disruptively support eTrading activities 24x7.
- Enforce a quantitative, data-driven approach to proactively monitor performance, capacity, latency, and cost of AWS-hosted solutions during routine and extra-volatile market events.
- Provide cross-functional leadership in building microservice-driven APIs for tech teams producing and consuming the AWS-stored data, on-premises or cloud-natively.
- Pilot emerging technologies that help quants, traders, and sales explore petabytes of stored data.

Required Qualifications, Capabilities, And Skills
- Formal training or certification on software engineering concepts and 3+ years of applied experience.
- Hands-on experience in design, deployment, and performance optimization of large Kubernetes clusters using AWS EKS, including network load balancing, ingress controllers, node group management, and workload orchestration.
- Experience in optimizing FSx for Lustre and EBS performance for latency-sensitive applications and managing storage costs.
- Strong problem-solving skills with the ability to troubleshoot complex distributed systems, with a relentless focus on resiliency improvement.
- Certified Kubernetes Application Developer (CKAD) certification.
- Experience in scripting languages like Bash, Python, or Go for automation of CI/CD and SRE tasks.

Preferred Qualifications, Capabilities, And Skills
- Proficiency in infrastructure-as-code tools like Terraform.
- Experience with vector or time-series databases such as KDB.
- Familiarity with building Datadog/Dynatrace/Geneos real-time dashboards.
- Experience with managing SageMaker workflows.
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Nice to meet you!

We're a leader in data and AI. Through our software and services, we inspire customers around the world to transform data into intelligence - and questions into answers. We're also a debt-free, multi-billion-dollar organization on our path to IPO-readiness. If you're looking for a dynamic, fulfilling career coupled with flexibility and a world-class employee experience, you'll find it here.

What You'll Do
The SAS CloudOps Platform division is an innovation team that applies the latest technologies to continuously improve customer experience and optimize operational efficiencies. Do you seek out innovative technologies and find the best way to apply them? Do you enjoy the challenges of integrating complex workflows while simplifying the customer experience? Do you find yourself championing solving problems through code? If so, the SAS CloudOps Platform team would like to talk to you.

As a DevOps Engineer, you will have the opportunity to join a team of engineers that is developing a new cloud-native platform to deliver self-service analytics to SAS customers. You will help guide the architecture and implement complex orchestration across global cloud platforms. A strong aptitude for continuous learning and rapidly implementing modern technologies is required. Additionally, this role requires a holistic view of architecture, applied development, operations, performance, security, and costs.

You Will
- Collaborate with a team of engineers and developers across multiple divisions to build and operate the SAS CloudOps platform architecture, components, and APIs.
- Build automation workflows to deliver internal and external customer services.
- Utilize containerization with technologies such as Docker and Kubernetes.
- Implement DevOps engineering best practices to deliver production services via self-service portals.
- Improve the automation framework and take ownership of continuous improvements for all services delivered via DevOps pipelines.
- Utilize a diverse set of architecture, engineering, and programming skills to build and manage integrated automated systems.
- Create and maintain current solution and architecture documentation.

Required
What we are looking for:
- Experience with modern technologies such as Docker, Kubernetes, Azure Cloud, Ansible, GitHub Cloud and Terraform.
- Experience in automation integrations and leveraging cloud services in one or more of the cloud providers, including AWS, Azure or GCP.
- Experience with continuous integration and deployment methodologies using pipelines such as GitHub Actions, GitLab pipelines, Azure DevOps or Jenkins.
- Experience with cloud networking and security troubleshooting.
- Experience developing, deploying, and supporting Kubernetes orchestration.

The Nice To Haves
- Experience with API development and integration.
- Experience with YAML or other markup languages to manage deployments as code.
- Experience managing multiple complex workstreams concurrently.
- Experience with Jira, Confluence, ServiceNow or other issue tracking software.
- Experience developing cloud-native solutions with security-by-design.
- Proficiency reading and debugging code in open-source projects.
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
ahmedabad, gujarat
On-site
You will be responsible for designing, developing, and maintaining scalable data pipelines using Azure Databricks. Your role will involve building and optimizing ETL/ELT processes for structured and unstructured data; collaborating with data scientists, analysts, and business stakeholders; integrating Databricks with Azure Data Lake, Synapse, Data Factory, and Blob Storage; developing real-time data streaming pipelines; and managing data models and data warehouses. Additionally, you will optimize performance, manage resources, ensure cost efficiency, implement best practices for data governance, security, and quality, troubleshoot and improve existing data workflows, contribute to architecture and technology strategy, mentor junior team members, and maintain documentation.

To excel in this role, you should have a Bachelor's/Master's degree in Computer Science, IT, or a related field, along with 5+ years of data engineering experience (minimum 2+ years with Databricks). Strong expertise in Azure cloud services (Data Lake, Synapse, Data Factory), proficiency in Spark (PySpark/Scala) and big data processing, experience with Delta Lake, Structured Streaming, and real-time pipelines, strong SQL skills, an understanding of data modeling and warehousing, familiarity with DevOps tools like CI/CD, Git, Terraform, and Azure DevOps, and excellent problem-solving and communication skills are essential.

Preferred qualifications include Databricks certification (Associate/Professional), experience with machine learning workflows on Databricks, knowledge of data governance tools like Purview, experience with REST APIs, Kafka, and Event Hubs, and cloud performance tuning and cost optimization experience.

Join us to be a part of a supportive and collaborative team, work with a growing company in the exciting BI and data industry, enjoy a competitive salary and performance-based bonuses, and have opportunities for professional growth and development. If you are interested in this opportunity, please send your resume to hr@exillar.com and fill out the form at https://forms.office.com/r/HdzMNTaagw.
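A hedged Databricks/PySpark sketch of the kind of pipeline described above: Auto Loader reads landed JSON incrementally and writes an append-only Delta table. Mount paths, the business key, and the target table are placeholders; `spark` is the session Databricks provides in a notebook or job.

```python
from pyspark.sql import functions as F

# Incrementally discover and read newly landed JSON files with Auto Loader.
raw = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/landing/_schemas/orders")
    .load("/mnt/landing/orders/")                    # hypothetical ADLS mount
)

# Light cleanup; in production a watermark would bound the dedup state.
cleaned = (
    raw.withColumn("ingested_at", F.current_timestamp())
       .dropDuplicates(["order_id"])                 # assumed business key
)

# Write an append-only Delta table as a one-shot incremental run.
(
    cleaned.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/silver/_checkpoints/orders")
    .outputMode("append")
    .trigger(availableNow=True)
    .toTable("silver.orders")                        # hypothetical target table
)
```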
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
NTT DATA is looking for an AWS DevOps Engineer to join their team in Pune, Maharashtra (IN-MH), India. As an AWS DevOps Engineer, you will be responsible for building and maintaining a robust, scalable real-time data streaming platform leveraging AWS and Confluent Cloud infrastructure. Your key responsibilities will include developing and building the platform, monitoring performance, collaborating with cross-functional teams, managing code using Git, applying Infrastructure as Code principles using Terraform, and implementing CI/CD practices using GitHub Actions.

The ideal candidate must have strong proficiency in AWS services such as IAM roles, access control (RBAC), S3, Lambda functions, VPC, Security Groups, RDS, CloudWatch, and more. Hands-on experience in Kubernetes (EKS) and expertise in managing resources/services like Pods, Deployments, and Helm charts is required. Additionally, expertise in Datadog, Docker, Python, Go, Git, Terraform, and CI/CD tools is essential. An understanding of security best practices and familiarity with tools like Snyk, SonarCloud, and CodeScene is also necessary.

Nice-to-have skills include prior experience with streaming platforms like Apache Kafka, knowledge of unit testing around Kafka topics, and experience with Splunk integration for logging and monitoring. Familiarity with Software Development Life Cycle (SDLC) principles is a plus.

NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. They are committed to helping clients innovate, optimize, and transform for long-term success. NTT DATA offers diverse experts in more than 50 countries and a robust partner ecosystem. Their services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is dedicated to providing digital and AI infrastructure and is part of the NTT Group, which invests significantly in R&D to support organizations and society in moving confidently into the digital future. Visit us at us.nttdata.com.
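For illustration (not NTT DATA's actual code): a minimal Python producer for a Confluent Cloud topic using the confluent-kafka client, the kind of building block such a streaming platform relies on. The bootstrap server, API key/secret, and topic name are placeholders.

```python
import json

from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",  # hypothetical cluster
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",       # placeholder Confluent Cloud API key
    "sasl.password": "<API_SECRET>",    # placeholder Confluent Cloud API secret
})


def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] @ offset {msg.offset()}")


event = {"order_id": "42", "status": "CREATED"}
producer.produce(
    "orders.events",                          # hypothetical topic
    key=event["order_id"].encode(),
    value=json.dumps(event).encode(),
    callback=delivery_report,
)
producer.flush()
```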
Posted 3 days ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a DevOps Engineer/Infrastructure Engineer with 7-10 years of experience, you will be part of a dynamic team at FIS, working on cutting-edge financial services and technology solutions. Your role will involve deploying Java-based applications, troubleshooting issues, and utilizing industry-leading technologies to support FIS IT systems. You will need to demonstrate proficient analytical and troubleshooting skills to creatively resolve problems. Additionally, you should have knowledge of change and release management principles, procedures, and techniques such as SDLC, ITIL, CI/CD, and DevOps. Experience in infrastructure and DevOps, along with working knowledge of Azure DevOps, OpenShift, Terraform with Azure Kubernetes Service, and various operating systems, is essential for this role.

Your responsibilities will include Java-based application deployment, managing Maven dependencies, and ensuring smooth operations. You will work with technologies such as Docker, Git, Jenkins, Kubernetes, and the Atlassian Suite. Your expertise in change and release management practices will be crucial in this role.

At FIS, you will have the opportunity to work in a collaborative and innovative environment, with a high degree of responsibility and numerous career development opportunities. We offer a competitive salary, comprehensive benefits, and a supportive work culture that values personal and professional growth.

FIS is dedicated to safeguarding the privacy and security of personal information processed to deliver services to clients. Our recruitment process primarily relies on direct sourcing, and we do not accept resumes from recruitment agencies not on our preferred supplier list. Join us at FIS and be a part of a team that values diversity, innovation, and continuous learning.
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You are a Full-Stack Developer with 5 to 8+ years of experience, ready to join IAI Solution Pvt Ltd in Bengaluru, India. IAI Solution operates at the forefront of applied AI, crafting intelligent systems that deliver actionable insights across domains. As a Full Stack Developer, you will be responsible for end-to-end development using Python, React.js, and Next.js. Your role will involve working with FastAPI, Django, Node.js, and cloud platforms like Azure or AWS to build and deploy scalable systems in a fast-paced, agile environment.

Your key responsibilities will include developing and maintaining web applications, building responsive UIs, designing robust backend APIs, working with cloud platforms for deployment, managing DevOps tasks, setting up CI/CD pipelines, optimizing database schemas, and collaborating with cross-functional teams to deliver high-quality features on time. You will also troubleshoot, debug, and improve application performance and security while taking ownership of assigned modules/features and contributing to technical planning and architecture discussions.

To excel in this role, you must have strong hands-on experience with Python and backend frameworks like FastAPI, Django, or Flask, as well as proficiency in frontend development using React.js and Next.js. You should be comfortable building and consuming RESTful APIs, have a solid understanding of database design and queries, practical experience with cloud platforms, familiarity with containerization and orchestration tools, and knowledge of Infrastructure as Code and CI/CD pipelines. Your problem-solving, debugging, and communication skills will be essential, along with your ability to work in an agile development environment and manage ambiguity and rapid iterations.

The technical stack you will be working with includes React.js, Next.js, Python, FastAPI, Django, Node.js, Azure, AWS, Docker, Kubernetes, Terraform, GitHub Actions, Azure DevOps, PostgreSQL, MongoDB, and Redis. In return, you can expect competitive compensation, performance incentives, a high-impact role in a fast-moving environment, the opportunity to lead mission-critical initiatives, a flexible work culture, learning support, and health benefits. Join IAI Solution Pvt Ltd to be part of a team that thrives in high-velocity environments and is passionate about building scalable and impactful systems.
Posted 3 days ago
2.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
You are an experienced Data Engineer with expertise in PySpark, Snowflake, and AWS. Your role involves designing, developing, and optimizing data pipelines and workflows in a cloud-based environment, utilizing AWS services, PySpark, and Snowflake for data processing and analytics.

Your key responsibilities include designing and implementing scalable ETL pipelines using PySpark on AWS, developing and optimizing data workflows for Snowflake integration, managing and configuring AWS services like S3, Lambda, Glue, EMR, and Redshift, collaborating with data analysts and business teams to understand requirements and deliver solutions, ensuring data security and compliance with best practices in AWS and Snowflake environments, monitoring and troubleshooting data pipelines and workflows for performance and reliability, and writing efficient, reusable, and maintainable code for data processing and transformation.

You should possess strong experience with AWS services (S3, Lambda, Glue, MSK, etc.), proficiency in PySpark for large-scale data processing, hands-on experience with Snowflake for data warehousing and analytics, a solid understanding of SQL and database optimization techniques, knowledge of data lake and data warehouse architectures, familiarity with CI/CD pipelines and version control systems (e.g., Git), strong problem-solving and debugging skills, experience with Terraform or CloudFormation for infrastructure as code, knowledge of Python for scripting and automation, familiarity with Apache Airflow for workflow orchestration, and an understanding of data governance and security best practices; certification in AWS or Snowflake is a plus.

You should have a Bachelor's degree in Computer Science, Engineering, or a related field with 6 to 10 years of experience, 5+ years of experience in AWS cloud engineering, and 2+ years of experience with PySpark and Snowflake.

This position falls under the Technology Job Family Group, specifically the Digital Software Engineering Job Family, and is a full-time role. Please refer to the above requirements for the most relevant skills needed for this position. For additional skills, you can review the details provided above or reach out to the recruiter for more information.

If you require a reasonable accommodation due to a disability to use our search tools or apply for a career opportunity, please review Accessibility at Citi. You can also view Citi's EEO Policy Statement and the Know Your Rights poster.
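As a hedged sketch of the workflow-orchestration skill mentioned above: a small Airflow 2.x (2.4+) DAG with an extract step followed by a load step. The DAG id, schedule, and task bodies are placeholders; in a real pipeline the callables might trigger a PySpark/Glue job and a Snowflake COPY INTO statement.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_s3(**context):
    # Placeholder: would typically kick off a PySpark or Glue job for the partition.
    print("extracting partition", context["ds"])


def load_to_snowflake(**context):
    # Placeholder: would typically issue a COPY INTO against Snowflake.
    print("loading partition", context["ds"])


with DAG(
    dag_id="daily_receivables_etl",      # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_from_s3", python_callable=extract_from_s3)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> load   # load runs only after a successful extract
```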
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Software Engineer working in the area of Software Engineering, you will be responsible for the development, maintenance, and optimization of software solutions and applications. You will apply scientific methods to analyze and solve software engineering problems, developing and applying software engineering practices and knowledge in research, design, and development. Your work will involve original thought, judgement, and supervision of other software engineers, collaborating with team members and stakeholders.

You should have proficiency in cloud platforms, especially GCP; CI/CD tools such as GitHub Actions; DevOps tools including Terraform and Cloud Resource Manager; and Datadog. Experience in containerization and orchestration with Docker and Kubernetes, along with scripting and automation skills in Python and Bash, is required. Knowledge of DevSecOps principles and tools such as SonarQube and dependency scanning is essential. Basic familiarity with Java Core, Spring Boot, Gradle, OpenFeign, and REST is also expected.

Within grade-specific responsibilities, you are expected to be a highly respected, experienced, and trusted engineer, mastering all phases of the software development lifecycle with innovation and industrialization. You will demonstrate dedication to business objectives, operate independently in complex environments, and take responsibility for significant aspects of activities. Your decision-making should consider the bigger picture, commercial aspects, and long-term partnerships with clients. Leadership skills balancing business, technical, and people objectives are crucial, along with involvement in recruitment and development activities. Your verbal communication skills should be strong to collaborate effectively with team members and stakeholders.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
chandigarh
On-site
We are seeking a Cloud Transition Engineer to implement cloud infrastructure and services per approved architecture designs and to ensure a smooth transition of those services into operational support. You will bridge the gap between design and operations, ensuring the efficient, secure, and fully supported delivery of new or modified services. Collaborating closely with cross-functional teams, you will validate infrastructure builds, coordinate deployment activities, and ensure all technical and operational requirements are met. This position is vital to maintaining service continuity and enabling scalable, cloud-first solutions across the organization.
Key Responsibilities:
- Implement Azure infrastructure and services based on architectural specifications.
- Build, configure, and validate cloud environments to meet project and operational needs (a small illustrative check follows this posting).
- Collaborate with various teams to ensure smooth service transitions.
- Create and maintain user documentation.
- Conduct service readiness assessments and facilitate knowledge transfer and training for support teams.
- Identify and mitigate risks related to service implementation and transition.
- Ensure compliance with internal standards, security policies, and governance frameworks.
- Support automation and deployment using tools like ARM templates, Bicep, or Terraform.
- Participate in post-transition reviews and continuous improvement efforts.
Requirements:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field (or equivalent experience).
- Proven experience in IT infrastructure or cloud engineering roles with a focus on Microsoft Azure.
- Demonstrated experience implementing and transitioning cloud-based solutions in enterprise environments.
- Proficiency in infrastructure-as-code tools such as ARM templates, Bicep, or Terraform.
- Hands-on experience with CI/CD pipelines and deployment automation.
- A proven track record of working independently on complex tasks while collaborating effectively with cross-functional teams.
Preferred qualifications that set you apart include strong documentation, troubleshooting, and communication skills; Microsoft Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect); and experience mentoring junior engineers or leading technical workstreams.
At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives, because we believe that great ideas come from great teams. Our commitment to ongoing career development and an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams working together are key to driving growth and delivering business results.
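As a hedged illustration of the validation and readiness work described above, the sketch below lists the resource groups in an Azure subscription and flags any missing an "owner" tag before handover to operations. It assumes the azure-identity and azure-mgmt-resource packages are installed, that DefaultAzureCredential can authenticate, and that the subscription ID and tag name are placeholders; it is not tooling referenced by the posting.

```python
# Illustrative Azure readiness check: flag resource groups without an "owner" tag.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder


def untagged_resource_groups() -> list[str]:
    client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    missing = []
    for rg in client.resource_groups.list():
        # Treat a missing tag set the same as a missing "owner" tag.
        tags = rg.tags or {}
        if "owner" not in tags:
            missing.append(rg.name)
    return missing


if __name__ == "__main__":
    for name in untagged_resource_groups():
        print(f"Resource group missing 'owner' tag: {name}")
```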
Posted 3 days ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
You will be responsible for planning, implementing, managing, and maintaining security systems such as antimalware, vulnerability management, and SIEM solutions. Your role will involve monitoring and investigating security alerts from various sources, providing incident response, and identifying potential weaknesses within the organization's network and systems so that you can recommend solutions (a small illustrative triage helper follows this posting). You will also take up security initiatives to enhance the overall security posture of the organization and document SOPs, metrics, and reports as necessary. Additionally, providing Root Cause Analyses (RCAs) for security incidents and collaborating with different teams and departments to address vulnerabilities and security incidents and to drive initiatives will be part of your responsibilities.
To be successful in this role, you should hold industry-recognized professional certifications such as CISSP, GCSA, CND, or similar. Your experience in computer security, risk analysis, audit, and compliance objectives will be crucial. Familiarity with network and web security tools like Palo Alto, ForeScout, and Zscaler, as well as experience with the AWS cloud environment and Terraform, will be advantageous. Expertise in Privileged Access Management, SIEM/SOAR, NDR, EDR, vulnerability management, and data security solutions is also desired. You must have a proven ability to make decisions and perform complex problem-solving under pressure. Creativity, out-of-the-box thinking, and the ability to work on multiple projects simultaneously in fast-paced environments are essential. Strong communication, presentation, and writing skills are required, along with the ability to share knowledge and collaborate with team members, managers, and customers. Your organizational skills, results-oriented approach, and ability to work in a fast-paced global environment will be critical to your success in this role.
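By way of illustration only, the following sketch shows the sort of alert-triage helper this role's AWS and SIEM duties might call for: pulling new high-severity findings from AWS Security Hub for review or forwarding. It assumes boto3 is installed, credentials are configured, Security Hub is enabled in the chosen region, and that the region value is a placeholder; it is not tooling referenced by the posting.

```python
# Illustrative triage helper: fetch new HIGH-severity Security Hub findings.
import boto3


def new_high_severity_findings(region: str = "us-east-1") -> list[dict]:
    client = boto3.client("securityhub", region_name=region)
    response = client.get_findings(
        Filters={
            "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
            "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
        },
        MaxResults=50,
    )
    return response["Findings"]


if __name__ == "__main__":
    for finding in new_high_severity_findings():
        resource_id = finding.get("Resources", [{}])[0].get("Id", "unknown")
        print(f"{finding['Title']} - {resource_id}")
```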
Posted 3 days ago
9.0 - 13.0 years
0 Lacs
pune, maharashtra
On-site
As a DevOps Lead with over 9 years of experience, you will provide the infrastructure platform, services, and tooling that support engineering teams. Your role will involve building automation, processes, and frameworks to automate the provisioning of infrastructure and platform. You will work closely with agile teams, collaborating with Developers, Product Owners, and Operations to define requirements, design, and implementation, and you will lead and manage complex initiatives and projects independently. You will be expected to identify inefficient processes and make trade-offs on where to achieve the most ROI. Driving consistency across scrum teams in delivering products in a repeatable, reliable, and secure manner is a key aspect of the role, as is bridging knowledge gaps between Operations and Development teams to ensure efficient communication and collaboration. You must work with exceptional attention to detail in a high-security environment: Ethoca is PCI-DSS Level 1 certified and must meet multiple audit requirements from third-party auditors. Your responsibilities will include interpreting and applying security best practices to create, harden, and secure OS images and CI/CD pipelines.
To excel in this position, you should have:
- Advanced expertise in cloud-based practices such as efficiency, repeatability, instrumentation, scalability, and security, with experience on Azure and/or Google Cloud.
- Experience with hybrid cloud environments and migrating traditional workloads to the public cloud, focusing on Azure and/or Google Cloud.
- Excellent knowledge of platform services such as VMs, storage, networking, load balancing, DNS, middleware, and runtimes on public cloud platforms.
- Strong scripting and automation experience in languages like Python, Ruby, Perl, and Bash.
- Expertise in configuration management and deployment tools such as Terraform, Jenkins, and Azure DevOps.
- Experience in CI/CD with a focus on managing secrets, tokens, and certificates, and on pipeline security and compliance for environments storing sensitive PCI/PII data (see the sketch below).
- A good understanding of data governance and of regulations impacting data storage and processing, such as GDPR and PCI.
- Knowledge of source control, including branching and workflows, and integration with build-automation tooling.
- Familiarity with deployment models using Docker and Kubernetes.
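The sketch referenced in the list above is a minimal, generic example of pipeline hygiene for secret management, not Ethoca's actual tooling: it scans a repository for strings that look like hardcoded credentials before they reach CI/CD. The patterns are deliberately crude placeholders; real scanners use far richer rule sets.

```python
# Minimal secret-scanning sketch for pre-CI/CD hygiene checks (illustrative only).
import re
from pathlib import Path

# Rough patterns purely for illustration.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(password|secret|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

SKIP_SUFFIXES = {".png", ".jpg", ".zip", ".gz"}


def scan_for_secrets(root: str) -> list[tuple[str, int]]:
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix in SKIP_SUFFIXES:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(pattern.search(line) for pattern in SECRET_PATTERNS):
                hits.append((str(path), lineno))
    return hits


if __name__ == "__main__":
    for file_path, lineno in scan_for_secrets("."):
        print(f"Possible hardcoded secret: {file_path}:{lineno}")
```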
Posted 3 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role: Cloud Platform Engineer
Project Role Description: Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Can deploy infrastructure and platform environments and creates a proof of architecture to test architecture viability, security, and performance.
Must-have skills: Data Modeling Techniques and Methodologies
Good-to-have skills: NA
Minimum experience required: 3 years
Educational qualification: 15 years of full-time education
Summary: As a Cloud Platform Engineer, you will be responsible for designing, building, testing, and deploying cloud application solutions that seamlessly integrate both cloud and non-cloud infrastructure. Your typical day will involve collaborating with various teams to ensure the architecture's viability, security, and performance while creating proofs of concept to validate your designs. You will engage in hands-on development and troubleshooting, ensuring that solutions meet the required standards and specifications. Additionally, you will be involved in continuous improvement efforts, optimizing existing systems and processes to enhance efficiency and effectiveness in cloud operations.
Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform; be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate the performance of cloud applications to ensure optimal functionality.
Professional & Technical Skills:
- Must have: proficiency in Data Modeling Techniques and Methodologies (a toy modeling sketch follows this posting).
- Good to have: experience with cloud service providers such as AWS, Azure, or Google Cloud Platform.
- Strong understanding of cloud architecture and deployment strategies.
- Experience with infrastructure-as-code tools like Terraform or CloudFormation.
- Familiarity with containerization technologies such as Docker and Kubernetes.
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Data Modeling Techniques and Methodologies.
- This position is based in Hyderabad.
- 15 years of full-time education is required.
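The sketch referenced above is a toy example, not the project's actual model: it expresses a small star schema (one dimension, one fact) as plain Python structures, the kind of artifact a data-modeling-focused platform engineer might draft before implementation. All table and column names are hypothetical.

```python
# Toy star-schema definition (illustrative only; names are hypothetical).
from dataclasses import dataclass


@dataclass
class Column:
    name: str
    dtype: str
    nullable: bool = True


@dataclass
class Table:
    name: str
    grain: str
    columns: list[Column]


# Dimension: one row per customer.
dim_customer = Table(
    name="dim_customer",
    grain="one row per customer",
    columns=[
        Column("customer_key", "BIGINT", nullable=False),
        Column("customer_name", "VARCHAR(200)"),
        Column("segment", "VARCHAR(50)"),
    ],
)

# Fact: one row per order line, referencing the dimension by surrogate key.
fact_order_line = Table(
    name="fact_order_line",
    grain="one row per order line",
    columns=[
        Column("order_line_id", "BIGINT", nullable=False),
        Column("customer_key", "BIGINT", nullable=False),
        Column("order_date", "DATE", nullable=False),
        Column("quantity", "INT"),
        Column("amount", "DECIMAL(12,2)"),
    ],
)

if __name__ == "__main__":
    for table in (dim_customer, fact_order_line):
        print(f"{table.name}: {table.grain} ({len(table.columns)} columns)")
```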
Posted 3 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JR0126084 Associate, Technology Operations – Pune, India
Want to work on global strategic initiatives with a FinTech company that is poised to revolutionize the industry? Join the team and help shape our company's digital capabilities and revolutionize an industry! Join Western Union as an Associate, Technology Operations. Western Union powers your pursuit.
The Associate, Technology Operations is expected to own solution and service delivery of Database Engineering and Operations for both system-level and application-level databases, on-premises and in the cloud, using insight from customers and colleagues worldwide to improve financial services for families, small businesses, multinational corporations, and non-profit organizations.
Role Responsibilities:
- Own and manage the database portfolio (Oracle, MSSQL, DB2, AWS RDS MySQL/PostgreSQL/Aurora MySQL/Aurora PostgreSQL/Redis/Couchbase/Cassandra).
- Enable delivery of a high quality of service for database infrastructure support and ensure service support and delivery processes are in place to meet business needs.
- Lead virtual teams, third parties, and third-party services; handle internal and third-party service review meetings covering performance, service improvements, quality, and processes.
- Design and implement highly available (HA), scalable, fit-for-use large database solutions and advocate best practices across on-premises and AWS Cloud environments.
- Collaborate with architecture, engineering, and support teams in designing and deploying scalable database solutions.
- Handle multiple projects and deadlines independently in a fast-paced environment.
- Apply advanced troubleshooting skills: database performance tuning, issue resolution, and ongoing replication issues (a small monitoring sketch follows this posting).
- Automate day-to-day administration and maintenance in cloud and on-premises environments and adopt emerging engineering best practices in CI/CD implementations.
- Define and manage best practices around database security and help ensure security and compliance across all database systems.
- This is a stakeholder-facing role: establish and manage expectations with business teams and drive your team to achieve the expected service levels. Identify and drive methodologies and processes that support world-class standards for production stability, and identify and manage key targeted areas for improvement. Conduct regular team huddles, periodic problem-solving workshops, process confirmation reviews, and 1:1 coaching sessions.
Role Requirements:
- 3+ years of experience working in a FinTech, e-commerce, IT, or consulting organization, with at least 1 year designing and implementing database systems on-premises and in AWS Cloud. Hands-on AWS experience is a must.
- Professional experience with on-premises/AWS Oracle, SQL Server, DB2, MySQL, PostgreSQL, Couchbase, or Cassandra databases.
- Hands-on experience managing at least one database replication technology: Oracle GoldenGate, IBM MQ, HVR (Fivetran), or AWS DMS.
- Prior experience leading global virtual teams, third parties, and third-party services.
- Proven experience using SQL and NoSQL datastores.
- Hands-on AWS Cloud experience; AWS certification preferred.
- Experience with database DevOps practices and infrastructure-as-code automation tools such as Terraform and AWS CloudFormation.
- Experience implementing solutions using various operating systems including Linux, plus load balancers, Liquibase DB change automation, HA/DR, and storage architecture.
- Experience in database migrations from on-premises to AWS Cloud.
We make financial services accessible to humans everywhere. Join us for what's next. Western Union is positioned to become the world's most accessible financial services company, transforming lives and communities. To support this, we have launched a Digital Banking Service and Wallet across several European markets to enhance our customers' experiences by offering a state-of-the-art digital ecosystem. More than moving money, we design easy-to-use products and services for our digital and physical financial ecosystem that help our customers move forward. Just as we help our global customers prosper, we support our employees in achieving their professional aspirations. You'll have plenty of opportunities to learn new skills and build a career, as well as receive a great compensation package. If you're ready to help drive the future of financial services, it's time to join Western Union. Learn more about our purpose and people at https://careers.westernunion.com.
Benefits: You will also have access to short-term incentives, multiple health insurance options, accident and life insurance, and access to best-in-class development platforms, to name a few (https://careers.westernunion.com/global-benefits/). Please see the location-specific benefits below; your recruiter may share additional role-specific benefits during your interview process or in an offer of employment.
Your India-specific benefits include: Employees' Provident Fund (EPF); gratuity payment; public holidays; annual leave, sick leave, compensatory leave, and maternity/paternity leave; annual health check-up; hospitalization insurance coverage (Mediclaim); group life insurance, group personal accident insurance coverage, and business travel insurance; and a relocation benefit.
Western Union values in-person collaboration, learning, and ideation whenever possible. We believe this creates value through common ways of working and supports the execution of enterprise objectives, which will ultimately help us achieve our strategic goals. By connecting face-to-face, we are better able to learn from our peers, problem-solve together, and innovate. Our Hybrid Work Model categorizes each role into one of three categories; Western Union has determined the category of this role to be Hybrid. This is defined as a flexible working arrangement that enables employees to divide their time between working from home and working from an office location, with the expectation of working from the office a minimum of three days a week.
We are passionate about diversity. Our commitment is to provide an inclusive culture that celebrates the unique backgrounds and perspectives of our global teams while reflecting the communities we serve. We do not discriminate based on race, color, national origin, religion, political affiliation, sex (including pregnancy), sexual orientation, gender identity, age, disability, marital status, or veteran status. The company will provide accommodation to applicants, including those with disabilities, during the recruitment process, following applicable laws.
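The monitoring sketch mentioned in the responsibilities above is a hedged illustration, not Western Union tooling: it summarizes the status of AWS DMS replication tasks, the kind of routine check a replication-heavy database operations role would involve. It assumes boto3 is installed, AWS credentials are configured, and that the region value is a placeholder.

```python
# Illustrative AWS DMS replication-task status summary (placeholder region).
import boto3


def replication_task_summary(region: str = "us-east-1") -> list[dict]:
    dms = boto3.client("dms", region_name=region)
    tasks = dms.describe_replication_tasks()["ReplicationTasks"]
    return [
        {
            "task": t["ReplicationTaskIdentifier"],
            "status": t["Status"],
            "tables_loaded": t.get("ReplicationTaskStats", {}).get("TablesLoaded"),
        }
        for t in tasks
    ]


if __name__ == "__main__":
    for summary in replication_task_summary():
        print(summary)
```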
Estimated Job Posting End Date: 08-06-2025. This application window is a good-faith estimate of the time that this posting will remain open; the posting will be promptly updated if the deadline is extended or the role is filled.
Posted 3 days ago