3.0 - 6.0 years
4 - 8 Lacs
Gurgaon
Remote
Experience Required: 3-6 years
Location: Gurgaon
Department: Product and Engineering
Working Days: Alternate Saturdays working (1st and 3rd)

Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure using AWS cloud services.
- Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices.
- Develop infrastructure as code using Terraform, ensuring modular, reusable, and secure provisioning.
- Containerize applications and optimize Docker images for performance and security.
- Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments.
- Drive SRE principles including monitoring, alerting, SLIs/SLOs, and incident response (see the alarm sketch after this posting).
- Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Automate routine tasks with scripting languages (Python, Bash, etc.).
- Lead capacity planning, auto-scaling, and cost-optimization efforts across cloud infrastructure.
- Collaborate closely with development teams to enable DevSecOps best practices.
- Participate in on-call rotations, handle outages calmly, and conduct postmortems.

Must-Have Technical Skills
- Kubernetes (EKS, Helm, Operators)
- Docker & Docker Compose
- Terraform (modular code, state management, remote backends)
- AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS)
- Linux system administration
- CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
- Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch
- Site reliability engineering practices
- Load balancing, autoscaling, and HA architectures

Good-to-Have
- GCP or Azure exposure
- Service mesh (Istio, Linkerd)
- Secrets management (Vault, AWS Secrets Manager)
- Security hardening of containers and infrastructure
- Chaos engineering exposure
- Knowledge of networking (DNS, firewalls, VPNs)

Soft Skills
- Strong problem-solving attitude; calm under pressure
- Good documentation and communication skills
- Ownership mindset with a drive to automate everything
- Collaborative and proactive with cross-functional teams
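To give a concrete flavor of the monitoring and alerting work this role describes, here is a minimal sketch (not taken from the posting) that uses boto3 to attach a CloudWatch CPU alarm to an EC2 Auto Scaling group. The group name, threshold, and SNS topic ARN are illustrative assumptions.

```python
# Minimal sketch: wire a CloudWatch CPU alarm to an Auto Scaling group.
# The ASG name "web-asg" and the SNS topic ARN below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],  # hypothetical ASG
    Statistic="Average",
    Period=300,                 # evaluate 5-minute averages
    EvaluationPeriods=2,        # require two consecutive breaches before alarming
    Threshold=80.0,             # alert above 80% average CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```

In practice such alarms would be provisioned through Terraform rather than ad hoc scripts; the API call above just makes the moving parts visible.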
Posted 3 weeks ago
4.0 years
0 Lacs
Thane, Maharashtra, India
On-site
DevOps Engineer - Kubernetes Specialist

Experience: 4-8 years
Salary: Competitive
Preferred Notice Period: Within 30 days
Opportunity Type: Hybrid (Mumbai)
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Kubernetes, CI/CD, Google Cloud

Ripplehire (one of Uplers' clients) is looking for a DevOps Engineer - Kubernetes Specialist who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview
We are seeking an experienced DevOps Engineer with deep expertise in Kubernetes, primarily Google Kubernetes Engine (GKE), to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and maintaining scalable containerized infrastructure, with a strong focus on cost optimization and operational excellence.

Key Responsibilities & Required Skills

Kubernetes Infrastructure & Deployment
Responsibilities:
- Design, deploy, and manage production-grade Kubernetes clusters
- Perform cluster upgrades, patching, and maintenance with minimal downtime
- Deploy and manage multiple microservices with ingress controllers and networking
- Configure storage solutions and persistent volumes for stateful applications
Required Skills:
- 3+ years of hands-on Kubernetes experience in production environments, primarily on Google Kubernetes Engine (GKE)
- Strong experience with Google Cloud Platform (GCP) and GKE-specific features
- Deep understanding of Docker, container orchestration, and GCP networking concepts
- Knowledge of Helm charts, YAML/JSON configuration, and service mesh technologies

CI/CD, Monitoring & Automation
Responsibilities:
- Design and implement robust CI/CD pipelines for Kubernetes deployments
- Implement comprehensive monitoring, logging, and alerting solutions
- Leverage AI tools and automation to improve team efficiency and task speed
- Create dashboards and implement GitOps workflows
Required Skills:
- Proficiency with Jenkins, GitLab CI, GitHub Actions, or similar CI/CD platforms
- Experience with Prometheus, Grafana, the ELK stack, or similar monitoring solutions
- Knowledge of Infrastructure as Code tools (Terraform, Ansible)
- Familiarity with AI/ML tools for DevOps automation and efficiency improvements

Cost Optimization & Application Management
Responsibilities:
- Analyze and optimize resource utilization across Kubernetes workloads (see the right-sizing sketch after this posting)
- Implement right-sizing strategies for services and batch jobs
- Deploy and manage Java-based applications and MySQL databases
- Configure horizontal/vertical pod autoscaling and resource management
Required Skills:
- Experience with resource management, capacity planning, and cost optimization
- Understanding of Java application deployment and MySQL database administration
- Knowledge of database operators, StatefulSets, and backup/recovery solutions
- Proficiency in scripting languages (Bash, Python, or Go)

Preferred Qualifications
- Experience with additional Google Cloud Platform services (Compute Engine, Cloud Storage, Cloud SQL, Cloud Build)
- Knowledge of advanced GKE features (Workload Identity, Binary Authorization, Config Connector)
- Experience with other cloud Kubernetes services (AWS EKS, Azure AKS) is a plus
- Knowledge of container security tools and chaos engineering
- Experience with multi-cluster GKE deployments and service mesh (Istio, Linkerd)
- Familiarity with AI-powered monitoring and predictive analytics platforms

Key Competencies
- Strong problem-solving skills with an innovative mindset toward AI-driven solutions
- Excellent communication and collaboration abilities
- Ability to work in fast-paced, agile environments with attention to detail
- Proactive approach to identifying issues using modern tools and AI assistance
- Ability to mentor team members and promote AI adoption for team efficiency

Join our team and help shape the future of our DevOps practices with cutting-edge containerized infrastructure.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of being shortlisted and meet the client for the interview!

About Our Client: Ripplehire is a recruitment SaaS for companies to identify the right candidates from employees' social networks and gamify the employee referral program with contests and referral bonuses that engage employees in the recruitment process. Developed and managed by Trampoline Tech Private Limited. Recognized by InTech50 as one of the Top 50 innovative enterprise software companies coming out of India, and an NHRD (HR Association) Staff Pick for the most innovative social recruiting tool in India. Used by 7 clients as of July 2014. It is available on a subscription-based pricing model.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal besides this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
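As a small illustration of the right-sizing work named above, here is a minimal sketch (assumptions: kubectl access via the default kubeconfig and the official kubernetes Python client) that flags containers running without CPU/memory requests, which is a common first pass when chasing GKE cost.

```python
# Minimal sketch: flag containers with no CPU/memory requests set,
# a typical starting point for right-sizing Kubernetes workloads.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        resources = container.resources
        requests = (resources.requests or {}) if resources else {}
        if "cpu" not in requests or "memory" not in requests:
            print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                  f"container={container.name} has no CPU/memory requests")
```

Output from a pass like this usually feeds into Helm values or VPA recommendations rather than manual edits.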
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
We're Hiring: FinOps Consultant at OpsLyft

Are you passionate about cloud cost optimization and FinOps strategies? Do you enjoy collaborating with finance, engineering, and cloud teams to drive financial efficiency? If so, we'd love to have you on board. At OpsLyft, we help businesses maximize efficiency and reduce cloud expenses through automation, actionable insights, and real-time cost governance.

What You'll Do:
- Partner with clients to develop and implement FinOps best practices.
- Analyze cloud costs and provide data-driven cost-saving strategies (see the sketch after this posting).
- Set up budgeting, forecasting, and governance frameworks.
- Optimize AWS, Azure, and GCP costs (Reserved Instances, Savings Plans, autoscaling).
- Automate cost monitoring and anomaly detection with engineering teams.
- Recommend and implement FinOps tools such as Kubecost, Apptio, and CloudHealth.

What We're Looking For:
- Strong FinOps and cloud cost management experience (AWS, Azure, GCP).
- Ability to analyze cost trends, budgets, and financial reports.
- Experience working with cross-functional teams to drive cost transparency.
- Nice-to-have: scripting and automation skills (Python, Bash, Terraform).
- Bonus: FinOps certifications (FCP, AWS, GCP Cloud Financial Management).

Why Join Us?
- Make an impact: help businesses save millions on cloud costs.
- Work with top FinOps and engineering leaders in cutting-edge cloud financial management.
- Stay ahead of the curve in the fast-evolving FinOps ecosystem.

Interested? Let's talk! Send your resume to hr@opslyft.com.
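For a taste of the raw data a cost-trend analysis starts from, here is a minimal sketch (assuming Cost Explorer is enabled and AWS credentials are configured; the date window is illustrative) that pulls daily AWS spend by service with boto3.

```python
# Minimal sketch: pull daily AWS spend grouped by service via Cost Explorer.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-07-01", "End": "2025-07-08"},  # illustrative window
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for day in response["ResultsByTime"]:
    for group in day["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 0:
            print(day["TimePeriod"]["Start"], service, round(amount, 2))
```

A real anomaly detector would compare each day's figures against a rolling baseline; this only shows where the numbers come from.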
Posted 3 weeks ago
2.0 - 3.8 years
5 - 7 Lacs
Hyderabad
Remote
Our fast-paced and collaborative environment inspires us to create, think, and challenge each other in ways that make our solutions and our teams better. Whether you're interested in engineering or development, marketing or sales, or something else - if this sounds like you, then we'd love to hear from you! We are headquartered in Denver, Colorado, with offices in the US, Canada, and India.

DevOps II

Vertafore is a leading technology company whose innovative software solutions are advancing the insurance industry. Our suite of products helps our customers better manage their business, boost their productivity and efficiency, and lower costs while strengthening relationships. Our mission is to move InsurTech forward by putting people at the heart of the industry. We are leading the way with product innovation, technology partnerships, and a focus on customer success. Our fast-paced and collaborative environment inspires us to create, think, and challenge each other in ways that make our solutions and our teams better. We are headquartered in Denver, Colorado, with offices across the U.S., Canada, and India.

JOB DESCRIPTION
Does building out a top-tier DevOps team, and everything that comes with it, sound intriguing? This is a DevOps senior software engineer/team lead role embedded in an energetic DevOps agile team. Our DevOps teams are tightly coupled and integrated with the culture, tools, practices, and patterns of the rest of our software engineering organization. They not only "keep the lights on" for our systems and networks, but also empower our other development teams with cutting-edge tools and capabilities to bring Vertafore products to market as quickly as possible. All of this is accomplished with cutting-edge, lean-agile software development methodologies.

Core Requirements and Responsibilities:
Essential job functions include but are not limited to the following:
- Lead the team in building out our continuous delivery infrastructure and processes for all our products, using state-of-the-art technologies.
- Be hands on, leading the architecture and design of the frameworks for automated continuous deployment of application code and for the operational and security monitoring and care of the infrastructure and software platforms.
- Serve, with your team, as the liaison between the agile development teams, SaaS operations, and external cloud providers for deployment, operational efficiency, security, and business continuity.

Why Vertafore is the place for you: *Canada Only
- The opportunity to work in a space where modern technology meets a stable and vital industry
- Medical, vision & dental plans
- Life, AD&D
- Short-term and long-term disability
- Pension plan & employer match
- Maternity, paternity, and parental leave
- Employee and Family Assistance Program (EFAP)
- Education assistance
- Additional programs: employee referral and internal recognition

Why Vertafore is the place for you: *US Only
- The opportunity to work in a space where modern technology meets a stable and vital industry
- We have a Flexible First work environment! Our North America team members use our offices for collaboration, community, and team-building, with members asked to sometimes come into an office and/or travel depending on job responsibilities. Other times, our teams work from home or a similar environment.
- Medical, vision & dental plans (PPO & high-deductible options)
- Health Savings Account & Flexible Spending Account options: health care FSA, dental & vision FSA, dependent care FSA, commuter FSA
- Life, AD&D (basic & supplemental), and disability
- 401(k) retirement savings plan & employer match
- Supplemental plans: pet insurance, hospital indemnity, and accident insurance
- Parental leave & adoption assistance
- Employee Assistance Program (EAP)
- Education & legal assistance
- Additional programs: tuition reimbursement, employee referral, internal recognition, and wellness
- Commuter benefits (Denver)

The selected candidate must be legally authorized to work in the United States.

The above statements are intended to describe the general nature and level of work being performed by people assigned to this job. They are not intended to be an exhaustive list of all the job responsibilities, duties, skills, or working conditions. In addition, this document does not create an employment contract, implied or otherwise, other than an "at will" relationship.

Vertafore strongly supports equal employment opportunity for all applicants regardless of race, color, religion, sex, gender identity, pregnancy, national origin, ancestry, citizenship, age, marital status, physical disability, mental disability, medical condition, sexual orientation, genetic information, or any other characteristic protected by state or federal law.

The Professional Services (PS) and Customer Success (CX) bonus plans are quarterly monetary bonus plans based on individual and practice performance against specific business metrics. Eligibility is determined by several factors, including start date, good standing in the company, and active status at time of payout. The Vertafore Incentive Plan (VIP) is an annual monetary bonus for eligible employees based on both individual and company performance. Eligibility is determined by several factors, including start date, good standing in the company, and active status at time of payout. Commission plans are tailored to each sales role, but common components include quota, MBOs, and ABPMs. Salespeople receive their formal compensation plan within 30 days of hire.

Vertafore is a drug-free workplace and conducts pre-employment drug and background screenings. We do not accept resumes from agencies, headhunters, or other suppliers who have not signed a formal agreement with us.

We want to make sure our recruiting process is accessible for everyone. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact recruiting@vertafore.com. (This contact information is for accommodation requests only.)

Knowledge, Skills, Abilities and Qualifications:
- Bachelor's degree in Computer Science (or a related technical field) or equivalent practical experience
- 2-3.8 years of professional experience in DevOps
- Excellent communication and interpersonal skills, and the ability to work with other developers, business analysts, testing specialists, and product owners to create stellar software products
- A strong sense of ownership
- Strong diagnostic, analytical, and design skills
- Closely follows industry trends and the open-source community, identifying and proactively advocating for cutting-edge tools that would optimize operational performance and/or reduce operating costs
- Experience in regulated environments
- Cares about quality and knows what it means to ship high-quality code and infrastructure
- A curious and avid learner
- Communicates clearly to explain and defend design decisions
- Self-motivated and an excellent problem-solver
- Driven to improve, personally and professionally
- Mentors and inspires others to raise the bar for everyone around them
- Loves to collaborate with peers, designing pragmatic solutions
- Operates best in a fast-paced, flexible work environment
- Experience with agile software development
- Experience in mission-critical cloud operations and/or DevOps engineering
- Experience with AWS technologies and/or developing with distributed systems using Ansible, Puppet, or Jenkins
- Strong understanding of and experience working with Windows, Unix, and Linux operating systems, specifically troubleshooting and administration
- Experience operating and tuning relational and NoSQL databases
- Strong experience with Terraform and Jenkins
- Experience performing support and administrative tasks within Amazon Web Services (AWS), Azure, OpenStack, or other cloud infrastructure technologies
- Proficiency in managing systems across multiple sites, including fail-over redundancy and autoscaling (knowledge of best practices and IT operations in an always-up, always-available service)
- Experience deploying, maintaining, and managing secure systems
- A background in software development, preferably web applications
- Proficiency in monitoring and logging tools such as the ELK stack (Elasticsearch, Logstash, and Kibana) - see the sketch after this posting
- Experience with build & deploy tools (Jenkins)
- Knowledge of IP networking, VPNs, DNS, load balancing, and firewalling
- Enjoys solving problems through the entire application stack
- Has been on the front lines of production outages, both working to resolve the outage and root-causing the problem to provide long-term resolution or early-identification strategies
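For the ELK proficiency named above, here is a minimal sketch of an outage-triage query, assuming an Elasticsearch 8.x Python client, a "logs-*" index pattern, and documents with level/@timestamp/message fields; endpoint and field names are illustrative assumptions.

```python
# Minimal sketch: pull the last 15 minutes of ERROR-level logs from an
# ELK stack, the kind of first look an on-call engineer takes in an outage.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

resp = es.search(
    index="logs-*",
    query={
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    size=20,
)

for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("@timestamp"), hit["_source"].get("message"))
```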
Posted 3 weeks ago
4.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Senior/Lead DevOps Engineer - Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.

Required Skills & Experience
• 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks.
• Hands-on experience with Azure networking, VNET integration, and firewall rules.
• Strong knowledge of Databricks cluster management, job scheduling, and optimization.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Experience with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep.
• Strong scripting skills in Python, Bash, or PowerShell.
• Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog).

Preferred Qualifications
• Databricks Certified Associate/Professional Administrator or equivalent certification.
• Experience with AWS or GCP Databricks in addition to Azure.
• Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow.
• Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment.

Key Responsibilities
1. Databricks Infrastructure Setup & Management
• Configure and manage Azure Databricks workspaces, networking, and security.
• Set up networking components such as VNET integration, private endpoints, and firewall configurations.
• Implement scalability strategies for efficient resource utilization.
• Ensure high availability, resilience, and security of Databricks infrastructure.
2. Cluster & Capacity Management
• Manage Databricks clusters, including autoscaling, instance selection, and performance tuning.
• Optimize compute resources to minimize costs while maintaining performance.
• Implement cluster policies and governance controls.
3. User & Access Management
• Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services.
• Manage Databricks Unity Catalog and enforce workspace-level access controls.
• Define and enforce security policies across Databricks workspaces.
4. CI/CD Automation for Databricks & ML Models
• Develop and manage CI/CD pipelines for Databricks notebooks, jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins.
• Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep.
• Implement automated testing, version control, and rollback strategies for Databricks workloads.
• Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments.
5. Databricks Asset Bundle Management
• Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently.
• Automate workspace configuration, job definitions, and dependencies using DAB.
• Ensure traceability, rollback, and version control of deployed assets.
• Integrate DAB with CI/CD pipelines for seamless deployment.
6. ML Model Deployment & Monitoring
• Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS) - see the MLflow sketch after this posting.
• Optimize model performance and enable real-time inference.
• Implement model monitoring, drift detection, and automated retraining pipelines.
7. Monitoring, Troubleshooting & Performance Optimization
• Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus.
• Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads.
• Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
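As a small illustration of the MLflow side of the deployment workflow above, here is a minimal sketch that logs and registers a model; the tracking URI, experiment path, and model name are illustrative assumptions (a configured Databricks CLI profile is assumed for the "databricks" URI).

```python
# Minimal sketch: train, log, and register a model with MLflow, the first
# step a CI/CD pipeline automates before promoting a model to serving.
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("databricks")             # assumes a configured profile
mlflow.set_experiment("/Shared/demo-experiment")  # hypothetical experiment path

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demo-classifier",  # hypothetical registry name
    )
```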
Posted 3 weeks ago
4.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a Senior/Lead DevOps Engineer - Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.

Required Skills & Experience
• 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks.
• Hands-on experience with Azure networking, VNET integration, and firewall rules.
• Strong knowledge of Databricks cluster management, job scheduling, and optimization.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Experience with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep.
• Strong scripting skills in Python, Bash, or PowerShell.
• Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog).

Preferred Qualifications
• Databricks Certified Associate/Professional Administrator or equivalent certification.
• Experience with AWS or GCP Databricks in addition to Azure.
• Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow.
• Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment.

Key Responsibilities
1. Databricks Infrastructure Setup & Management
• Configure and manage Azure Databricks workspaces, networking, and security.
• Set up networking components such as VNET integration, private endpoints, and firewall configurations.
• Implement scalability strategies for efficient resource utilization.
• Ensure high availability, resilience, and security of Databricks infrastructure.
2. Cluster & Capacity Management
• Manage Databricks clusters, including autoscaling, instance selection, and performance tuning (see the cluster-inventory sketch after this posting).
• Optimize compute resources to minimize costs while maintaining performance.
• Implement cluster policies and governance controls.
3. User & Access Management
• Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services.
• Manage Databricks Unity Catalog and enforce workspace-level access controls.
• Define and enforce security policies across Databricks workspaces.
4. CI/CD Automation for Databricks & ML Models
• Develop and manage CI/CD pipelines for Databricks notebooks, jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins.
• Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep.
• Implement automated testing, version control, and rollback strategies for Databricks workloads.
• Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments.
5. Databricks Asset Bundle Management
• Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently.
• Automate workspace configuration, job definitions, and dependencies using DAB.
• Ensure traceability, rollback, and version control of deployed assets.
• Integrate DAB with CI/CD pipelines for seamless deployment.
6. ML Model Deployment & Monitoring
• Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS).
• Optimize model performance and enable real-time inference.
• Implement model monitoring, drift detection, and automated retraining pipelines.
7. Monitoring, Troubleshooting & Performance Optimization
• Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus.
• Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads.
• Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
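For the cluster and cost governance duties above, here is a minimal sketch using the Databricks SDK for Python (databricks-sdk) to inventory workspace clusters and their autoscale bounds; authentication is assumed to come from environment variables or a configured profile, and the field names reflect my understanding of the SDK rather than this employer's setup.

```python
# Minimal sketch: list clusters and their autoscale settings, a starting
# point for spotting over-provisioned or fixed-size clusters.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up DATABRICKS_HOST/DATABRICKS_TOKEN or a profile

for cluster in w.clusters.list():
    if cluster.autoscale:
        print(f"{cluster.cluster_name}: autoscale "
              f"{cluster.autoscale.min_workers}-{cluster.autoscale.max_workers} workers")
    else:
        print(f"{cluster.cluster_name}: fixed size, {cluster.num_workers} workers")
```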
Posted 3 weeks ago
2.0 - 5.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Roles & Responsibilities:
- Take ownership of architecture design and development of scalable, distributed software systems.
- Translate business requirements into technical requirements.
- Oversee technical execution, ensuring code quality, adherence to deadlines, and efficient resource allocation.
- Apply data-driven decision-making with a focus on achieving product goals.
- Design and develop data ingestion and processing pipelines capable of handling large-scale events.
- Own the complete software development lifecycle, including requirements analysis, design, coding, testing, and deployment.
- Utilize AWS/Azure services such as IAM, monitoring, load balancing, autoscaling, databases, networking, storage, ECR, AKS, and ACR.
- Implement DevOps practices using tools like Docker and Kubernetes to ensure continuous integration and delivery.
- Develop DevOps scripts for automation and monitoring.
- Collaborate with cross-functional teams, conduct code reviews, and provide guidance on software design and best practices.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- 2-5 years of experience in software development, with relevant work experience.
- Strong coding skills with proficiency in Python, Java, or C++.
- Experience with API frameworks, both stateless and stateful, such as FastAPI, Django, Spring, and Spring Boot (see the sketch after this posting).
- Proficiency with cloud platforms, specifically AWS, Azure, or GCP.
- Hands-on experience with DevOps tools including Docker, Kubernetes, and AWS services.
- Strong understanding of scalable application design principles, security best practices, and compliance with privacy regulations.
- Good knowledge of software engineering practices such as version control (Git), DevOps (Azure DevOps preferred), and Agile or Scrum.
- Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
- Experience with the SDLC and development best practices.
- Preferred: experience with AI/ML-based product development.
- Experience with Agile methodology for continuous product development and delivery.

Why you might want to join us:
- Be part of shaping one of the most exciting AI companies.
- Learn from a peer group of experts spanning AI, computer vision, and robotics to data engineering and systems engineering.
- Sharp, motivated co-workers in a fun office environment.

Our motto:
- Put employees first. We only succeed when our employees succeed.
- Think big. Be ambitious and have audacious goals.
- Aim for excellence. Quality and excellence count in everything we do.
- Own it and get it done. Results matter!
- Embrace each other's differences.
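Since the posting names FastAPI among the API frameworks, here is a minimal sketch of a stateless service with one typed route and a health endpoint; the service and route names are illustrative assumptions.

```python
# Minimal sketch: a stateless FastAPI service with a health probe and a
# typed ingestion endpoint, runnable with: uvicorn main:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="events-api")

class Event(BaseModel):
    source: str
    payload: dict

@app.get("/healthz")
def health() -> dict:
    # Liveness/readiness probe target for Kubernetes deployments.
    return {"status": "ok"}

@app.post("/events")
def ingest(event: Event) -> dict:
    # In a real pipeline this would enqueue to a broker (e.g., Kafka or SQS).
    return {"accepted": True, "source": event.source}
```

Pydantic models give request validation for free, which is one reason stateless API layers like this scale cleanly behind a load balancer.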
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Company
Changing the world through digital experiences is what Adobe's all about. We give everyone - from emerging artists to global brands - everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and to transform how companies interact with customers across every screen. We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Challenge
Customer Success Engineers are responsible for the partnership between Adobe and our strategic clients, driving value realization and return on the client's investment. This team is made up of technology-savvy individuals with experience in digital marketing who know its value in driving company strategies. You will work directly with our clients to understand business and technical requirements, and to develop solutions that ensure success.

This position centers on strategic client relationship management. You will be assigned as a designated technical consultant to 5 to 7 customers who are using Adobe Experience Manager. This includes implementing and supporting standard deployment methodologies, managing custom integrations, and bridging communication among clients, third-party providers, project management, internal engineering, and automation engineers. You will have a strong focus on client retention, cultivate future projects, and qualify new opportunities. There will be frequent interaction with clients, including Directors, VPs, and C-level executives of Fortune 500 companies. The CSE role is equally client facing (developing long-term client relationships), keyboard facing (technical operations), and colleague facing (developing your own subject-matter expertise and drawing on that of others in a collaborative environment).

What you'll do
- Provide a great relationship experience for all assigned clients and help clients expand their usage and adoption of Adobe products.
- Be a trusted technical advisor, enabling clients to apply our tools to achieve their business objectives by providing resources to answer clients' questions, identifying needs for account customization and further implementation where applicable, and ensuring that every client contract is renewed.
- Work closely with the Sales Executive and consult with other team members (consulting/project management/engineering services/customer support) to make sure mutual objectives are met in support of client happiness.
- Communicate consistently with clients throughout the contract lifecycle, escalating meaningful issues where needed.
- Maintain client contact and provide status updates for all outstanding issues while continuing to manage client expectations, keeping clients satisfied and expectations realistic.
- Oversee customer support to ensure timely closure of quality issues, and provide project management for professional services requests.
- Fully understand client requests, documenting them and engaging the appropriate resources.

You will ideally have:
- Bachelor's degree in business management or similar.
- Real passion for digital marketing and client success, with exceptional customer skills demonstrated in previous employment.
- A strong and consistent track record of successfully managing client relationships and technical projects, with an excellent work ethic and leadership skills.
- Self-motivated, responsive, very responsible, and passionate about exceeding client expectations.
- An understanding of enterprise internet business models and online processes, terminology, concepts, and strategies.
- Excellent social, presentation, and interpersonal skills, both verbal and written.
- Demonstrated ability to deal with change and excel in high-stress situations while remaining self-managed, responsive, and dedicated to client success.

Duties include:
- Work with Adobe's AEM, Connect, LiveCycle, and other teams to assist in developing new AMIs and deployments of new software.
- Develop the procedures and routines needed to implement and improve autoscaling capabilities (see the sketch after this posting).
- Use Amazon and Azure cloud services and advanced Adobe command/control systems as part of the next-generation cloud management solution.
- Help develop and support our upgrade systems for enterprise customers as Adobe products evolve over time.
- Collaborate with the teams that provision, customize, monitor, handle, and upgrade our cloud-hosted enterprise offering, and drive continuous improvements into the management system supporting these areas.

Skill Requirements:
- Strong experience with cloud hosting, including Microsoft Azure and AWS cloud infrastructure.
- Strong knowledge of Linux, Windows Server, and Java systems; experience with Chef.
- Experience troubleshooting and operating Adobe AEM in an enterprise environment.
- Experience with long-term operation, monitoring, and upgrades of enterprise software.

Special consideration given for:
- Master's degree or other advanced education
- Prior account management and/or project management experience with Fortune 500 clients
- Knowledge of and experience with digital marketing technologies
- Prior experience with customer success in a SaaS or managed services company
- Experience using digital marketing products and FSI vertical experience
- Consulting and/or technical training experience

Adobe is an equal opportunity employer. We support diversity in the workplace regardless of race, gender, religion, age, sexual orientation, gender identity, disability, or veteran status. Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here.

Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
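For the autoscaling duties listed above, here is a minimal sketch (the Auto Scaling group name and capacity change are illustrative assumptions) that inspects an EC2 Auto Scaling group with boto3 and nudges its desired capacity within the configured bounds.

```python
# Minimal sketch: inspect an Auto Scaling group and scale out by one
# instance. The group name "aem-publish-asg" is a placeholder.
import boto3

autoscaling = boto3.client("autoscaling")

groups = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["aem-publish-asg"]
)["AutoScalingGroups"]

for group in groups:
    print(group["AutoScalingGroupName"],
          "desired:", group["DesiredCapacity"],
          "min:", group["MinSize"], "max:", group["MaxSize"])

# Scale out by one instance; HonorCooldown avoids flapping during a cooldown.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="aem-publish-asg",
    DesiredCapacity=groups[0]["DesiredCapacity"] + 1,
    HonorCooldown=True,
)
```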
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Job Title: Junior Cloud Engineer - Google Cloud Platform (GCP)
Location: Remote, work from India
Job Type: Full-time
Salary: ₹15,000 - ₹17,000 INR per month

About Habot Connect DMCC:
Habot Connect DMCC is a dynamic, 100% remote, bootstrapped startup based in Dubai, UAE. We are passionately dedicated to a vital mission: building a cutting-edge platform to connect parents of children with learning difficulties with the right learning support assistants and suppliers. We foster an open and collaborative culture where performance is recognized and valued. Our team uses the latest tools, including Google Workspace and custom automation solutions, to drive efficiency and impact. Join our innovative journey at Habot Connect DMCC!

Are you a passionate and driven fresher eager to launch your career in cloud computing? Habot Connect DMCC is looking for an enthusiastic Junior Cloud Engineer (GCP) to join our growing technical team. This is a fantastic opportunity to gain hands-on experience with cutting-edge GCP services, learn from experienced professionals, and contribute to real-world projects. If you have a strong foundation in computer science and a burning desire to grow your skills in cloud technology, we encourage you to apply!

What You'll Be Doing (Key Responsibilities):
As a GCP entry-level hire, you will actively contribute to our cloud environment under supervision and grow your skills in key areas. Your responsibilities will include:
- Deploy and manage applications using Cloud Run, App Engine, and GKE (Google Kubernetes Engine) - see the sketch after this posting
- Handle infrastructure configuration and CI/CD pipelines for scalable deployments
- Monitor and troubleshoot cloud service issues across the stack (logs, errors, autoscaling, etc.)
- Administer Cloud SQL instances and ensure database performance and availability
- Support Firebase Hosting environments for static websites and web apps
- Collaborate closely with backend and frontend teams to ensure seamless cloud integration
- Write automation scripts or Cloud Functions when needed for infrastructure tasks
- Optimize deployments for performance, cost-efficiency, and security

What We're Looking For:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related engineering discipline
- Basic understanding of GCP services, including Cloud Run, App Engine, Kubernetes (GKE) and container-orchestration basics, Cloud SQL, and Firebase Hosting
- Familiarity with CI/CD workflows (Cloud Build, GitHub Actions, etc.)
- Understanding of cloud logging, monitoring, and alerting systems
- Familiarity with basic networking and IAM configuration in GCP
- Comfortable using command-line tools, version control (Git), and Linux systems

Application Process:
Interested candidates are invited to apply for this exciting opportunity. As part of our evaluation process, shortlisted applicants will be required to complete a brief hands-on project in the second round. This task is designed to assess your practical cloud skills, problem-solving approach, and ability to work with core GCP services in a real-world scenario. It is a chance to showcase how you think and build in a cloud-first environment.

If you are confident in your ability to fulfill the requirements listed above, please submit your application through the Google Form link below:
https://forms.gle/7jAzBThbuxamFw3r8
Please note that applications submitted through the portal will not be accepted or processed.
If your application meets our required parameters, we will send you an invitation for a first-round interview, scheduled for July 18, 2025, via Google Meet. Upon successful completion of the first round, there will be a second round; details will be shared if you are selected in the first round.
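For the Cloud Run deployments this role mentions, one common scripted path is to shell out to the gcloud CLI. A minimal sketch follows; the service name, image, and region are illustrative assumptions, and gcloud is assumed to be installed and authenticated.

```python
# Minimal sketch: deploy a container image to Cloud Run via the gcloud CLI.
import subprocess

service = "hello-web"                         # hypothetical service name
image = "gcr.io/my-project/hello-web:latest"  # hypothetical image

subprocess.run(
    [
        "gcloud", "run", "deploy", service,
        "--image", image,
        "--region", "asia-south1",
        "--platform", "managed",
        "--allow-unauthenticated",
    ],
    check=True,  # raise if the deployment fails
)
```

In a CI/CD pipeline the same command usually lives in a Cloud Build or GitHub Actions step; the Python wrapper is just for local scripting.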
Posted 3 weeks ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company: Our client is a leading Indian multinational IT services and consulting firm. It provides digital transformation, cloud computing, data analytics, enterprise application integration, infrastructure management, and application development services. The company caters to over 700 clients across industries such as banking and financial services, manufacturing, technology, media, retail, and travel & hospitality. Its industry-specific solutions are designed to address complex business challenges by combining domain expertise with deep technical capabilities. It has a global workforce of over 80,000 professionals and a presence in more than 50 countries.

Job Title: Java Developer
Locations: PAN India
Experience: 8 - 12 Years (Relevant)
Employment Type: Contract to Hire
Work Mode: Work From Office
Notice Period: Immediate to 15 Days

JOB DESCRIPTION:

1. Core Python and Java Proficiency
- Code fluency and readability: writes clean, maintainable, and modular code using Pythonic principles.
- Data structures & algorithms: understands and applies lists, dictionaries, sets, trees, and algorithms effectively.
- Object-oriented programming: demonstrates a solid grasp of OOP concepts, including inheritance, polymorphism, and encapsulation.
- Testing & debugging: uses tools like pytest, unittest, and debuggers effectively in development (see the sketch after this posting).
- API development.

2. System Design & Software Architecture Aptitude
- Modular system design: can design scalable, loosely coupled systems.
- Understanding of microservices: has working knowledge of service decomposition, inter-service communication, and orchestration.
- Data flow and integration: understands how data moves through systems, including ETL, messaging, and APIs.
- Design patterns: applies appropriate software design patterns and understands trade-offs.
- Cloud & infrastructure awareness: familiar with deploying services to AWS, GCP, or Azure, including basics like containers, autoscaling, and monitoring.

3. Architectural Growth Potential
- Big-picture thinking: thinks beyond code to how components fit into the whole architecture.
- Decision-making trade-offs: can articulate the pros and cons of different technology decisions.
- Documentation & diagrams: comfortable creating architecture diagrams and explaining decisions.
- Curiosity & learning: eager to learn new tools, frameworks, and architectural patterns.
- Cross-team collaboration: works well across development, QA, DevOps, and business teams.

4. Communication & Leadership
- Explains technical ideas clearly: can break down complex ideas for both technical and non-technical stakeholders.
- Mentorship potential: willing to guide junior developers and share knowledge.
- Feedback receptiveness: accepts and integrates feedback well.
- Proactive problem solving: identifies issues and suggests architectural improvements without being asked.

5. Culture & Ownership
- Takes initiative: a self-starter who takes ownership of deliverables.
- Reliability: delivers consistent, high-quality work.
- Team fit: aligns with team values, communication style, and pace.
- Growth mindset: shows a desire to evolve into a broader architectural role over time.
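For the testing competency named above, here is a minimal pytest sketch; the function under test is a stand-in written purely for illustration.

```python
# Minimal sketch: pytest-style unit tests for a small pure function.
import pytest

def dedupe_preserving_order(items: list) -> list:
    """Remove duplicates while keeping first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

def test_dedupe_keeps_first_occurrence():
    assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]

def test_dedupe_empty_list():
    assert dedupe_preserving_order([]) == []

@pytest.mark.parametrize("items", [[1], ["a", "a"], [None, None]])
def test_dedupe_idempotent(items):
    # Deduplicating twice should change nothing the second time.
    once = dedupe_preserving_order(items)
    assert dedupe_preserving_order(once) == once
```

Running `pytest` in the file's directory discovers and executes all three tests.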
Posted 3 weeks ago
9.0 years
3 - 5 Lacs
Cochin
On-site
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description
Location: Kochi / Pune
C1 - 9-12 years; CBR - 200K; passport mandatory.

Key Roles and Responsibilities
- Thorough understanding of OCI cloud concepts, environment, and services.
- Good hands-on experience with OCI architecture and design.
- Implementation of OCI IaaS and PaaS services.
- Conducts business process analysis/design, needs assessments, and cost-benefit analyses related to the impact on the business.
- Understands business needs and translates them into system requirements and an architecture aligned with the scope of the solution.

Technical Skills
- Hands-on administration skills in the OCI cloud environment.
- Good understanding of OCI cloud, network operations, and private and hybrid cloud administration.
- Good understanding of IaaS, PaaS, SaaS, and cloud design.
- Expertise in designing and planning cloud environments in enterprise settings, including application dependencies, client presentation mechanisms, network connectivity, and overall virtualization strategies.
- Good understanding of virtualization management and configuration: autoscaling concepts (scale up and scale down) for VMs, VM upgrades, and configuring availability domains/fault domains.
- Building a technical and security infrastructure in the OCI cloud for selected apps/workloads.
- Understanding of OCI services: VCN, subnets, route tables, Dynamic Routing Gateway, Service Gateway, security lists, NSGs, load balancers, storage buckets, logging, auditing, monitoring, provisioning, security services (Cloud Guard, Network Firewall), and IAM.
- Ability to promptly diagnose and remedy cloud-related problems and failures; hands-on experience with OCI backend infrastructure, troubleshooting, and root cause analysis.
- Manage server build, commission, and decommission processes.
- Logging and monitoring for IaaS/PaaS resources.
- FastConnect setup and traffic-flow experience, with working knowledge of IPSEC/VPN tunneling.
- Knowledge of VCN peering and managing the Dynamic Routing Gateway and security lists on OCI.
- Implement and maintain all OCI infrastructure and services: VMs, OCI Functions, monitoring, and notifications (see the sketch after this posting).
- Experienced in deploying OCI VMs and managing cloud workloads through OS Management Hub.
- Experienced in running DR drills on a regular basis, as needed or requested.

Mandatory Skills (Must Have)
- Primary skill: OCI certification - Oracle Cloud Infrastructure Architect Associate/Professional.

Secondary Skills, at least L2 or L2+ (Good to Have)
- Knowledge of other clouds: AWS/Azure.
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform.
- Knowledge of tools such as ServiceNow, BMC Helix, Ansible, Jenkins, and Splunk.
- Cloud automation using Python and PowerShell scripts.
- Knowledge of DevOps and Kubernetes.

Behavioral Skills (Must Have)
- Good communication skills: effective written and oral communication.
- Ability to lead a team of junior architects.
- Eagerness to learn new cloud services and technologies.
- Team collaboration.
- Creative thinking in implementing new solutions.

Skill Upgradation and Competency Building
- Clear Wipro exams and internal certifications from time to time to upgrade skills.
- Attend trainings and seminars to sharpen knowledge in the functional/technical domain.
- Write papers, articles, and case studies, and publish them on the intranet.

Performance parameters and measures:
1. Contribution to customer projects: quality, SLA, ETA, number of tickets resolved, problems solved, number of change requests implemented, zero customer escalations, CSAT.
2. Automation: process optimization, reduction in process steps, reduction in the number of tickets raised.
3. Skill upgradation: number of trainings and certifications completed; number of papers and articles written in a quarter.

Mandatory Skills: Oracle Database Administration.
Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
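As a small taste of the OCI administration described above, here is a minimal sketch using the OCI Python SDK to list compute instances in a compartment; it assumes a standard ~/.oci/config file, and the compartment OCID is a placeholder.

```python
# Minimal sketch: list compute instances in one compartment with the OCI SDK.
import oci

config = oci.config.from_file()  # default profile from ~/.oci/config
compute = oci.core.ComputeClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # hypothetical OCID

for instance in compute.list_instances(compartment_id=compartment_id).data:
    print(instance.display_name, instance.lifecycle_state, instance.shape)
```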
Posted 3 weeks ago
4.0 years
0 Lacs
Calcutta
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In technology delivery at PwC, you will focus on implementing and delivering innovative technology solutions to clients, enabling seamless integration and efficient project execution. You will manage the end-to-end delivery process and collaborate with cross-functional teams to drive successful technology implementations.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, or status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
- Deploy and maintain critical applications on cloud-native microservices architecture
- Implement automation, effective monitoring, and infrastructure as code
- Deploy and maintain CI/CD pipelines across multiple environments
- Design and implement secure automation solutions for development, testing, and production environments
- Build and deploy automation, monitoring, and analysis solutions
- Manage the continuous integration and delivery pipeline to maximize efficiency
- Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring

Skills:
- Automating repetitive tasks using scripting (e.g., Bash, Python, PowerShell, YAML)
- Practical experience with Docker containerization and clustering (Kubernetes/ECS/AKS)
- Expertise with the Azure cloud platform (e.g., ARM, App Service and Functions, autoscaling, load balancing) - see the sketch after this posting
- Version control experience (e.g., Git)
- Experience implementing CI/CD (e.g., Azure DevOps, Jenkins)
- Experience with configuration management tools (e.g., Ansible, Chef)
- Experience with infrastructure as code (e.g., Terraform, CloudFormation)

Preferred knowledge apart from the above:
- Good communication skills
- Securing, scaling, and managing Linux virtual environments and on-prem Windows Server environments

Certifications/Credentials: certification in Windows administration, Azure DevOps, or Kubernetes
Mandatory skill set: DevOps
Preferred skill set: DevOps
Years of experience required: 4-6 years
Education qualification: Bachelor's degree in Computer Science, IT, or a related field.
Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology
Required Skills: DevOps
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Client Management, Communication, Creativity, Deliverable Planning, Delivery Management, Developing User Stories, Embracing Change, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, IT Business Strategy, IT Consulting, IT Infrastructure, IT Service Management (ITSM), IT Systems Development, Leading Design Workshops, Learning Agility, Market Research, Optimism, Process Improvement, and more
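For the Azure platform expertise listed above, here is a minimal sketch using the Azure SDK for Python to enumerate resource groups; it assumes the azure-identity and azure-mgmt-resource packages are installed and that AZURE_SUBSCRIPTION_ID is set (an environment-variable name chosen here for illustration).

```python
# Minimal sketch: list resource groups in a subscription with the Azure SDK.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()  # env vars, CLI login, or managed identity
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

client = ResourceManagementClient(credential, subscription_id)

for group in client.resource_groups.list():
    print(group.name, group.location)
```

DefaultAzureCredential is the usual choice for automation because the same script works locally (via `az login`) and inside a pipeline (via a service principal or managed identity).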
Posted 3 weeks ago
2.0 - 4.0 years
1 - 2 Lacs
Calcutta
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Associate

Job Description & Summary
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In technology delivery at PwC, you will focus on implementing and delivering innovative technology solutions to clients, enabling seamless integration and efficient project execution. You will manage the end-to-end delivery process and collaborate with cross-functional teams to drive successful technology implementations.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, or status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
- Deploy and maintain critical applications on cloud-native microservices architecture
- Implement automation, effective monitoring, and infrastructure as code
- Deploy and maintain CI/CD pipelines across multiple environments
- Design and implement secure automation solutions for development, testing, and production environments
- Build and deploy automation, monitoring, and analysis solutions
- Manage the continuous integration and delivery pipeline to maximize efficiency
- Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring

Mandatory Skill Sets:
- Automating repetitive tasks using scripting (e.g., Bash, Python, PowerShell, YAML) - see the sketch after this posting
- Practical experience with Docker containerization and clustering (Kubernetes/ECS/AKS)
- Expertise with the Azure cloud platform (e.g., ARM, App Service and Functions, autoscaling, load balancing)
- Version control experience (e.g., Git)
- Experience implementing CI/CD (e.g., Azure DevOps, Jenkins)
- Experience with configuration management tools (e.g., Ansible, Chef)
- Experience with infrastructure as code (e.g., Terraform, CloudFormation)

Preferred Skill Sets:
- Good communication skills
- Securing, scaling, and managing Linux virtual environments and on-prem Windows Server environments

Certifications/Credentials: certification in Windows administration, Azure DevOps, or Kubernetes
Years of experience required: 2-4 years
Education qualification: Bachelor's degree in Computer Science, IT, or a related field.
Degrees/Field of Study required: Bachelor's degree
Required Skills: Azure DevOps, Linux Bash, Microsoft PowerShell, Python (Programming Language)
Optional Skills: Linux
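For the scripting requirement above, here is a minimal sketch of the kind of repetitive housekeeping task teams automate in Python; the directory path and retention window are illustrative assumptions.

```python
# Minimal sketch: delete application log files older than 14 days.
import time
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")   # hypothetical application log directory
MAX_AGE_SECONDS = 14 * 24 * 3600   # two-week retention window

now = time.time()
for log_file in LOG_DIR.glob("*.log"):
    if now - log_file.stat().st_mtime > MAX_AGE_SECONDS:
        print(f"removing {log_file}")
        log_file.unlink()
```

In production the same logic usually runs from cron or a systemd timer, with logrotate as the more conventional alternative.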
Posted 3 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Title: DevOps & Linux Infrastructure Engineer
Location: Chennai
Duration: Full Time

This role involves performing advanced Linux server (preferably RedHat) debugging and troubleshooting, including kernel-level analysis and system performance tuning. The engineer will lead and support server upgrades, patching, and routine maintenance activities with minimal to no business disruption. They will manage and support DevOps platforms including Jira, Confluence, Jenkins, JFrog Artifactory, SonarQube, and Crowd. The position also requires leading and coordinating infrastructure validation and application server migrations across RedHat/Ubuntu platforms from legacy to current versions.

A critical function is to rapidly troubleshoot and restore application services during downtime or system outages while maintaining compliance with SLAs (a sketch of a simple service-restoration check appears after this posting). The engineer will support and maintain multi-node architectures, including load balancing and auto-scaling capabilities, to ensure system scalability and performance. Administering and configuring GitHub Enterprise (GHE) is also a key responsibility, with emphasis on source control management (SCM) and access configuration. Additionally, the candidate will validate hardware and software configurations of newly provisioned servers to align with performance and security requirements.

Collaboration across teams is necessary to implement and track change management processes, ensuring alignment of all stakeholders ahead of scheduled implementations. Active participation in incident response, root cause analysis, and permanent resolution efforts is expected. The engineer will also work with CI/CD tools (preferably Jenkins) to manage pipeline configurations and support application release cycles.

Required Skills & Qualifications:
A Bachelor's degree in Computer Science, Information Technology, or a related field is required. The candidate should have at least 5 years of hands-on experience in Linux system administration, with a focus on RedHat or Ubuntu platforms, and 3 or more years of proven experience in DevOps tools administration (e.g., Jira, Jenkins, Artifactory, SonarQube). A solid understanding of server infrastructure, including virtualization, system performance, and security best practices, is essential. The role demands strong expertise in shell scripting and Linux command-line utilities, familiarity with GitHub Enterprise and enterprise-grade source code management workflows, and experience executing zero-downtime migrations and system upgrades in production environments. Knowledge of load balancing, high-availability architectures, and autoscaling strategies is important, along with an excellent understanding of change management and incident response in enterprise environments. Finally, the candidate should have experience configuring and managing CI/CD pipelines using Jenkins or similar tools.
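Here is the service-restoration sketch referenced above: poll an HTTP health endpoint and restart the systemd unit only after consecutive failures. The URL and unit name are illustrative assumptions, and a real setup would add alerting instead of silently restarting.

```python
# Minimal sketch: health-check a service and restart its systemd unit
# after three consecutive failures, to avoid restart flapping.
import subprocess
import time

import requests

HEALTH_URL = "http://localhost:8080/healthz"  # hypothetical endpoint
UNIT = "myapp.service"                        # hypothetical systemd unit

def healthy() -> bool:
    try:
        return requests.get(HEALTH_URL, timeout=3).status_code == 200
    except requests.RequestException:
        return False

failures = 0
for _ in range(3):       # three checks, 5 seconds apart
    if healthy():
        failures = 0
        break
    failures += 1
    time.sleep(5)

if failures == 3:
    subprocess.run(["systemctl", "restart", UNIT], check=True)
    print(f"restarted {UNIT}")
```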
Posted 3 weeks ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In technology delivery at PwC, you will focus on implementing and delivering innovative technology solutions to clients, enabling seamless integration and efficient project execution. You will manage the end-to-end delivery process and collaborate with cross-functional teams to drive successful technology implementations.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
Deploy and maintain critical applications on cloud-native microservices architecture
Implement automation, effective monitoring, and infrastructure-as-code
Deploy and maintain CI/CD pipelines across multiple environments
Design and implement secure automation solutions for development, testing, and production environments
Build and deploy automation, monitoring, and analysis solutions
Manage the continuous integration and delivery pipeline to maximize efficiency
Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring

Skills:
Automating repetitive tasks using scripting (e.g. Bash, Python, PowerShell, YAML)
Practical experience with Docker containerization and clustering (Kubernetes/ECS/AKS)
Expertise with the Azure cloud platform (e.g. ARM, App Service and Functions, autoscaling, load balancing)
Version control system experience (e.g. Git)
Experience implementing CI/CD (e.g. Azure DevOps, Jenkins); a minimal health-gate sketch follows this listing
Experience with configuration management tools (e.g. Ansible, Chef)
Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)

Preferred Knowledge:
Good communication skills
Securing, scaling, and managing Linux virtual environments and on-prem Windows Server environments
Certifications/Credentials: Certification in Windows Admin/Azure DevOps/Kubernetes

Mandatory skill set: DevOps
Preferred skill set: DevOps
Years of experience required: 4-6 years
Education qualification: Bachelor's degree in Computer Science, IT, or a related field.
Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology
Required Skills: DevOps
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Client Management, Communication, Creativity, Deliverable Planning, Delivery Management, Developing User Stories, Embracing Change, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, IT Business Strategy, IT Consulting, IT Infrastructure, IT Service Management (ITSM), IT Systems Development, Leading Design Workshops, Learning Agility, Market Research, Optimism, Process Improvement {+ 22 more}
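As a concrete illustration of the CI/CD experience called for above, here is a minimal Python sketch of a post-deploy health gate that a pipeline stage could run before promoting a release. The endpoint URL, retry budget, and 200-only success criterion are hypothetical assumptions; a real pipeline would read them from pipeline variables.

```python
#!/usr/bin/env python3
"""Minimal sketch of a post-deploy health gate for a CI/CD stage."""
import sys
import time
import urllib.error
import urllib.request

HEALTH_URL = "https://app.example.com/healthz"  # hypothetical endpoint
ATTEMPTS = 10
DELAY_SECONDS = 15

def healthy(url):
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def main():
    for attempt in range(1, ATTEMPTS + 1):
        if healthy(HEALTH_URL):
            print(f"healthy after {attempt} attempt(s)")
            return 0
        time.sleep(DELAY_SECONDS)
    print("deployment failed the health gate", file=sys.stderr)
    return 1  # non-zero exit fails the pipeline stage

if __name__ == "__main__":
    sys.exit(main())
```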
Posted 3 weeks ago
2.0 - 4.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Associate

Job Description & Summary
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In technology delivery at PwC, you will focus on implementing and delivering innovative technology solutions to clients, enabling seamless integration and efficient project execution. You will manage the end-to-end delivery process and collaborate with cross-functional teams to drive successful technology implementations.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
Deploy and maintain critical applications on cloud-native microservices architecture
Implement automation, effective monitoring, and infrastructure-as-code
Deploy and maintain CI/CD pipelines across multiple environments
Design and implement secure automation solutions for development, testing, and production environments
Build and deploy automation, monitoring, and analysis solutions
Manage the continuous integration and delivery pipeline to maximize efficiency
Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring

Mandatory Skill Sets:
Automating repetitive tasks using scripting (e.g. Bash, Python, PowerShell, YAML)
Practical experience with Docker containerization and clustering (Kubernetes/ECS/AKS)
Expertise with the Azure cloud platform (e.g. ARM, App Service and Functions, autoscaling, load balancing)
Version control system experience (e.g. Git)
Experience implementing CI/CD (e.g. Azure DevOps, Jenkins)
Experience with configuration management tools (e.g. Ansible, Chef)
Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)

Preferred Skill Sets:
Good communication skills
Securing, scaling, and managing Linux virtual environments and on-prem Windows Server environments
Certifications/Credentials: Certification in Windows Admin/Azure DevOps/Kubernetes

Years of experience required: 2-4 years
Education qualification: Bachelor's degree in Computer Science, IT, or a related field.
Degrees/Field of Study required: Bachelor Degree
Required Skills: Azure DevOps, Linux Bash, Microsoft PowerShell, Python (Programming Language)
Optional Skills: Linux
Posted 3 weeks ago
0 years
0 Lacs
India
On-site
Company Description
Evallo is a leading provider of a comprehensive SaaS platform for tutors and tutoring businesses, revolutionizing education management. With features like advanced CRM, profile management, standardized test prep, automatic grading, and insightful dashboards, we empower educators to focus on teaching. We're dedicated to pushing the boundaries of ed-tech and redefining efficient education management.

Why this role matters
Evallo is scaling from a focused tutoring platform to a modular operating system for all service businesses that bill by the hour. As we add payroll, proposals, white-boarding, and AI tooling, we need a Solution Architect who can translate product vision into a robust, extensible technical blueprint. You'll be the critical bridge between product, engineering, and customers, owning architecture decisions that keep us reliable at 5k+ concurrent users and cost-efficient at 100k+ total users.

Outcomes we expect
Map the current backend and frontend, flag structural debt, and publish an Architecture Gap Report
Define naming and layering conventions, linter/formatter rules, and a lightweight ADR process
Ship reference architecture for new modules
Lead cross-team design reviews; no major feature ships without architecture sign-off
The eventual goal is to have Evallo run in a fully observable, autoscaling environment with <10% infra cost waste, with monitoring dashboards triggering fewer than 5 false positives per month.

Day-to-day
Solution Design: Break down product epics into service contracts, data flows, and sequence diagrams. Choose the right patterns (monolith vs. microservice, event vs. REST, cache vs. DB index) based on cost, team maturity, and scale targets.
Platform-wide Standards: Codify review checklists (security, performance, observability) and enforce them via GitHub templates and CI gates. Champion a shift-left testing mindset; critical paths reach 80% automated coverage before QA touches them.
Scalability & Cost Optimization: Design load-testing scenarios that mimic 5k concurrent tutoring sessions (see the sketch after this listing); guide DevOps on autoscaling policies and CDN strategy. Audit infra spend monthly; recommend serverless, queuing, or data-tier changes to cut waste.
Release & Environment Strategy: Maintain clear promotion paths: local → sandbox → staging → prod with one-click rollback. Own schema-migration playbooks; zero-downtime releases are the default, not the exception.
Technical Mentorship: Run fortnightly architecture clinics; level up engineers on domain-driven design and performance profiling. Act as tie-breaker on competing technical proposals, keeping debates respectful and evidence-based.

Qualifications
5+ years of engineering experience, with 2+ years in a dedicated architecture or staff-level role on a high-traffic SaaS product
Proven track record designing multi-tenant systems that scaled beyond 50k users or 1k RPM
Deep knowledge of Node.js/TypeScript (our core stack), MongoDB or similar NoSQL, plus comfort with event brokers (Kafka, NATS, or RabbitMQ)
Fluency in AWS (preferred) or GCP primitives: EKS, Lambda, RDS, CloudFront, IAM
Hands-on experience with observability stacks (Datadog, New Relic, Sentry, or OpenTelemetry)
Excellent written communication; you can distill technical trade-offs into one page for execs and one diagram for engineers
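A minimal sketch of the load-testing scenario mentioned under Scalability & Cost Optimization, written in Python with the aiohttp library for consistency with the other examples on this page (Evallo's own harness would more likely be Node.js/TypeScript). The target URL and session count are placeholders, not Evallo's real endpoints or targets.

```python
#!/usr/bin/env python3
"""Minimal load-test sketch: N concurrent 'sessions' against one endpoint.

Requires aiohttp (pip install aiohttp). URL and SESSIONS are placeholders.
"""
import asyncio
import time

import aiohttp

URL = "https://staging.example.com/api/health"  # hypothetical target
SESSIONS = 500  # scale toward 5k once the harness is validated

async def one_session(client, results):
    start = time.perf_counter()
    try:
        async with client.get(URL) as resp:
            await resp.read()
            results.append((resp.status, time.perf_counter() - start))
    except aiohttp.ClientError:
        results.append((0, time.perf_counter() - start))  # 0 = failed

async def main():
    results = []
    async with aiohttp.ClientSession() as client:
        await asyncio.gather(
            *(one_session(client, results) for _ in range(SESSIONS))
        )
    ok = sum(1 for status, _ in results if status == 200)
    p95 = sorted(t for _, t in results)[int(len(results) * 0.95)]
    print(f"{ok}/{SESSIONS} ok, p95 latency {p95:.3f}s")

if __name__ == "__main__":
    asyncio.run(main())
```

A realistic harness would also ramp load gradually and vary request mixes; this sketch only shows the concurrency skeleton and a p95 summary.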
Posted 3 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Experience: 5+ years
Work Mode: Work from office only

Job Description:

1. AWS Cloud Infrastructure
Design, deploy, and manage scalable, secure, and highly available systems on AWS.
Optimize cloud costs, enforce tagging, and implement security best practices (IAM, VPC, GuardDuty, etc.); see the sketch after this description.
Automate infrastructure provisioning using Terraform or AWS CDK.
Ensure backup, disaster recovery, and high availability (HA) strategies are in place.

2. Kubernetes (EKS preferred)
Manage and scale Kubernetes clusters (preferably Amazon EKS).
Implement CI/CD pipelines with GitOps (e.g., ArgoCD or Flux) or traditional tools (e.g., Jenkins, GitLab).
Enforce RBAC policies, namespace isolation, and pod security policies.
Monitor cluster health; optimize pod scheduling, autoscaling, and resource limits/requests.

3. Monitoring and Observability (Datadog)
Build and maintain Datadog dashboards for real-time visibility across systems and services.
Set up alerting policies, SLOs, SLIs, and incident response workflows.
Integrate Datadog with AWS, Kubernetes, and applications for full-stack observability.
Conduct post-incident reviews using Datadog analytics to reduce MTTR.

4. Automation and DevOps
Automate manual processes (e.g., server setup, patching, scaling) using Python, Bash, or Ansible.
Maintain and improve CI/CD pipelines (Jenkins) for faster and more reliable deployments.
Drive Infrastructure-as-Code (IaC) practices using Terraform to manage cloud resources.
Promote GitOps and version-controlled deployments.

5. Linux Systems Administration
Administer Linux servers (Ubuntu, RHEL, Amazon Linux) for stability and performance.
Harden OS security, configure SELinux and firewalls, and ensure timely patching.
Troubleshoot system-level issues: disk, memory, network, and processes.
Optimize system performance using tools like top, htop, iotop, and netstat.
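As one hedged example of the cost and tagging enforcement described above, this Python sketch uses boto3 to flag EC2 instances missing required cost-allocation tags. The required-tag list is an illustrative policy assumption, not the employer's standard; it assumes boto3 is installed and AWS credentials are configured.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag EC2 instances missing required cost-allocation tags."""
import boto3

REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}  # hypothetical policy

def untagged_instances():
    ec2 = boto3.client("ec2")
    paginator = ec2.get_paginator("describe_instances")
    offenders = []
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    offenders.append((instance["InstanceId"], sorted(missing)))
    return offenders

if __name__ == "__main__":
    for instance_id, missing in untagged_instances():
        print(f"{instance_id} missing tags: {', '.join(missing)}")
```

In practice a report like this would feed a tagging-compliance dashboard or open tickets automatically rather than just printing to stdout.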
Posted 3 weeks ago
3.0 years
8 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
Industry & Sector: A fast-scaling provider of analytics & data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation.
Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans (a broadcast-join sketch follows this listing).
Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
Implement robust data quality, error-handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools.
Collaborate with Data Architects and BI teams to translate analytical requirements into scalable data models.
Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines.

Skills & Qualifications
Must-Have:
3+ years of hands-on PySpark development in production environments.
Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
Proficiency in Python 3, object-oriented design, and writing reusable modules.
Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
Strong SQL skills and understanding of star/snowflake schema modeling.
Preferred:
Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
Experience deploying on Databricks or EMR and configuring autoscaling clusters.
Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
Hands-on work with modern open-source tech stacks and leading cloud platforms.
Mentorship from senior data engineers and architects, fostering rapid skill growth.
Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Python, Spark, data modeling, Databricks, Spark SQL, Docker, Hadoop, ETL, Kafka
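To make the join-tuning responsibility above concrete, here is a minimal PySpark sketch that broadcasts a small dimension table to avoid shuffling a large fact table. Paths, column names, and the shuffle-partition setting are illustrative assumptions, not values from the posting.

```python
"""Minimal PySpark sketch: a large-table join rewritten with a broadcast hint."""
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder
         .appName("join-tuning-sketch")
         .config("spark.sql.shuffle.partitions", "200")  # tune to data volume
         .getOrCreate())

events = spark.read.parquet("s3://bucket/events/")      # large fact table
dim = spark.read.parquet("s3://bucket/dim_customers/")  # small dimension

# Broadcasting the small side avoids a full shuffle of the large table.
joined = events.join(broadcast(dim), on="customer_id", how="left")

# Repartition by the write key so output files are evenly sized.
(joined.repartition("event_date")
       .write.mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3://bucket/curated/events_enriched/"))
```

The broadcast hint only pays off when the dimension fits comfortably in executor memory; otherwise techniques like salting or adaptive query execution are the usual fallbacks.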
Posted 3 weeks ago
3.0 years
8 - 30 Lacs
Bhubaneswar, Odisha, India
On-site
Industry & Sector: A fast-scaling provider of analytics & data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation.
Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans.
Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
Implement robust data quality, error-handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools (a data-quality gate sketch follows this listing).
Collaborate with Data Architects and BI teams to translate analytical requirements into scalable data models.
Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines.

Skills & Qualifications
Must-Have:
3+ years of hands-on PySpark development in production environments.
Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
Proficiency in Python 3, object-oriented design, and writing reusable modules.
Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
Strong SQL skills and understanding of star/snowflake schema modeling.
Preferred:
Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
Experience deploying on Databricks or EMR and configuring autoscaling clusters.
Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
Hands-on work with modern open-source tech stacks and leading cloud platforms.
Mentorship from senior data engineers and architects, fostering rapid skill growth.
Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Python, Spark, data modeling, Databricks, Spark SQL, Docker, Hadoop, ETL, Kafka
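As a sketch of the data-quality responsibility in the listing above, the following PySpark snippet gates a publish step on null and duplicate key counts. Table paths, the key column, and the zero-tolerance rule are assumptions for illustration.

```python
"""Minimal PySpark sketch of a data-quality gate before publishing a dataset."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-gate-sketch").getOrCreate()

df = spark.read.parquet("s3://bucket/staging/orders/")  # hypothetical input

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dup_keys = total - df.dropDuplicates(["order_id"]).count()

# Fail fast so bad data never reaches the published layer.
if null_keys or dup_keys:
    raise ValueError(
        f"DQ gate failed: {null_keys} null keys, {dup_keys} duplicate keys")

df.write.mode("overwrite").parquet("s3://bucket/published/orders/")
```

Production pipelines usually record these counts to a metrics table as well, so lineage and trend dashboards can track quality over time.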
Posted 3 weeks ago
3.0 years
8 - 30 Lacs
Bengaluru, Karnataka, India
On-site
Industry & Sector: A fast-scaling provider of analytics & data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation (a streaming-ingest sketch follows this listing).
Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans.
Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
Implement robust data quality, error-handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools.
Collaborate with Data Architects and BI teams to translate analytical requirements into scalable data models.
Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines.

Skills & Qualifications
Must-Have:
3+ years of hands-on PySpark development in production environments.
Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
Proficiency in Python 3, object-oriented design, and writing reusable modules.
Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
Strong SQL skills and understanding of star/snowflake schema modeling.
Preferred:
Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
Experience deploying on Databricks or EMR and configuring autoscaling clusters.
Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
Hands-on work with modern open-source tech stacks and leading cloud platforms.
Mentorship from senior data engineers and architects, fostering rapid skill growth.
Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Python, Spark, data modeling, Databricks, Spark SQL, Docker, Hadoop, ETL, Kafka
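To illustrate the streaming-pipeline responsibility referenced above, here is a minimal PySpark Structured Streaming sketch reading JSON events from Kafka into a Parquet sink. The broker address, topic, schema, and paths are placeholders, and the Kafka connector package must match the Spark version in use.

```python
"""Minimal PySpark Structured Streaming sketch: Kafka topic to a file sink."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("stream-ingest-sketch").getOrCreate()

# Hypothetical event schema for the 'payments' topic.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
       .option("subscribe", "payments")
       .load())

# Kafka delivers bytes; cast to string and parse JSON against the schema.
parsed = (raw.select(F.from_json(F.col("value").cast("string"), schema)
                      .alias("evt"))
             .select("evt.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "s3://bucket/bronze/payments/")
         .option("checkpointLocation", "s3://bucket/_chk/payments/")
         .outputMode("append")
         .start())

query.awaitTermination()
```

The checkpoint location is what gives the stream exactly-once file output across restarts, so it must live on durable storage.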
Posted 3 weeks ago
3.0 years
8 - 30 Lacs
Hyderabad, Telangana, India
On-site
Industry & Sector: A fast-scaling provider of analytics & data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation.
Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans.
Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
Implement robust data quality, error-handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools.
Collaborate with Data Architects and BI teams to translate analytical requirements into scalable data models.
Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines.

Skills & Qualifications
Must-Have:
3+ years of hands-on PySpark development in production environments.
Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
Proficiency in Python 3, object-oriented design, and writing reusable modules.
Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
Strong SQL skills and understanding of star/snowflake schema modeling.
Preferred:
Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
Experience deploying on Databricks or EMR and configuring autoscaling clusters.
Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
Hands-on work with modern open-source tech stacks and leading cloud platforms.
Mentorship from senior data engineers and architects, fostering rapid skill growth.
Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Python, Spark, data modeling, Databricks, Spark SQL, Docker, Hadoop, ETL, Kafka
Posted 3 weeks ago
3.0 years
8 - 30 Lacs
Kochi, Kerala, India
On-site
Industry & Sector: A fast-scaling provider of analytics & data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation.
Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans.
Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
Implement robust data quality, error-handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools.
Collaborate with Data Architects and BI teams to translate analytical requirements into scalable data models.
Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines.

Skills & Qualifications
Must-Have:
3+ years of hands-on PySpark development in production environments.
Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
Proficiency in Python 3, object-oriented design, and writing reusable modules.
Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
Strong SQL skills and understanding of star/snowflake schema modeling.
Preferred:
Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
Experience deploying on Databricks or EMR and configuring autoscaling clusters.
Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
Hands-on work with modern open-source tech stacks and leading cloud platforms.
Mentorship from senior data engineers and architects, fostering rapid skill growth.
Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Python, Spark, data modeling, Databricks, Spark SQL, Docker, Hadoop, ETL, Kafka
Posted 3 weeks ago
3.0 years
8 - 30 Lacs
Pune, Maharashtra, India
On-site
Industry & Sector: A fast-scaling provider of analytics & data engineering services within the enterprise software and digital transformation sector in India seeks an onsite PySpark Engineer to build and optimize high-volume data pipelines on modern big-data platforms.

Role & Responsibilities
Design, develop, and maintain PySpark-based batch and streaming pipelines for data ingestion, cleansing, transformation, and aggregation.
Optimize Spark jobs for performance and cost, tuning partitions, caching strategies, and join execution plans.
Integrate diverse data sources (RDBMS, NoSQL, cloud storage, and REST APIs) into unified, consumable datasets for analytics and reporting teams.
Implement robust data quality, error-handling, and lineage tracking using Spark SQL, Delta Lake, and metadata tools.
Collaborate with Data Architects and BI teams to translate analytical requirements into scalable data models.
Follow Agile delivery practices, write unit and integration tests, and automate deployments through Git-driven CI/CD pipelines.

Skills & Qualifications
Must-Have:
3+ years of hands-on PySpark development in production environments.
Deep knowledge of Spark SQL, DataFrames, RDD optimizations, and performance tuning.
Proficiency in Python 3, object-oriented design, and writing reusable modules.
Experience with the Hadoop ecosystem, Hive/Impala, and cloud object storage such as S3, ADLS, or GCS.
Strong SQL skills and understanding of star/snowflake schema modeling.
Preferred:
Exposure to Delta Lake, Apache Airflow, or Kafka for orchestration and streaming.
Experience deploying on Databricks or EMR and configuring autoscaling clusters.
Knowledge of Docker or Kubernetes for containerized data workloads.

Benefits & Culture Highlights
Hands-on work with modern open-source tech stacks and leading cloud platforms.
Mentorship from senior data engineers and architects, fostering rapid skill growth.
Performance-based bonuses, skill-development stipends, and a collaborative, innovation-driven environment.

Skills: SQL, Hadoop ecosystem, PySpark, Scala, Python 3, performance tuning, problem solving, Apache Airflow, Hive, EMR, Kubernetes, Agile, Impala, DataFrames, Delta Lake, RDD optimizations, object-oriented design, Python, Spark, data modeling, Databricks, Spark SQL, Docker, Hadoop, ETL, Kafka
Posted 3 weeks ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description

AWS Cloud Infrastructure:
Design, deploy, and manage scalable, secure, and highly available systems on AWS.
Optimize cloud costs, enforce tagging, and implement security best practices (IAM, VPC, GuardDuty, etc.).
Automate infrastructure provisioning using Terraform or AWS CDK.
Ensure backup, disaster recovery, and high availability (HA) strategies are in place.

Kubernetes (EKS preferred):
Manage and scale Kubernetes clusters (preferably Amazon EKS).
Implement CI/CD pipelines with GitOps (e.g., ArgoCD or Flux) or traditional tools (e.g., Jenkins, GitLab).
Enforce RBAC policies, namespace isolation, and pod security policies.
Monitor cluster health; optimize pod scheduling, autoscaling, and resource limits/requests.

Monitoring and Observability (Datadog):
Build and maintain Datadog dashboards for real-time visibility across systems and services.
Set up alerting policies, SLOs, SLIs, and incident response workflows.
Integrate Datadog with AWS, Kubernetes, and applications for full-stack observability.
Conduct post-incident reviews using Datadog analytics to reduce MTTR.

Automation and DevOps:
Automate manual processes (e.g., server setup, patching, scaling) using Python, Bash, or Ansible.
Maintain and improve CI/CD pipelines (Jenkins) for faster and more reliable deployments.
Drive Infrastructure-as-Code (IaC) practices using Terraform to manage cloud resources.
Promote GitOps and version-controlled deployments.

Linux Systems Administration:
Administer Linux servers (Ubuntu, RHEL, Amazon Linux) for stability and performance.
Harden OS security, configure SELinux and firewalls, and ensure timely patching.
Troubleshoot system-level issues: disk, memory, network, and processes.
Optimize system performance using tools like top, htop, iotop, and netstat.
Posted 4 weeks ago