Amritsar, Punjab, India
Remote
Experience: 4+ years
Salary: INR 2,100,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Adfolks LLC, a ZainTECH company)
(*Note: This is a requirement for one of Uplers' clients - Adfolks LLC, a ZainTECH company.)

What do you need for this opportunity?
Must-have skills: ELK Stack, Grafana, OpenShift, Prometheus, Rancher, DevOps, Terraform, AWS, Azure, Kubernetes, Linux

Adfolks LLC - A ZainTECH Company is looking for:
Adfolks is seeking a Cloud Engineer who can join immediately to work on a high-visibility, technically interesting project.

Location: Remote

About Our Company
Adfolks LLC is a Dubai-based technology services company, operating for 7 years now, with key focus areas in Data Science & Engineering, Cloud Services, Application Modernization, and Cyber Security in the Middle East region. We are an AWS Advanced Consulting Partner, a Microsoft Azure Gold Partner, and a Google Cloud Partner, and we are the only KCSP (Kubernetes Certified Service Provider) in the region. Visit our website https://adfolks.com/ to know more.

Job Description
Experience: 4+ years

Summary
We are looking for a passionate, innovative professional to join our cloud services team. You'll work in a collaborative and inclusive environment that values diverse perspectives and continuous learning, and provides industry-leading benefits with unmatched opportunities for career growth. Key accountabilities include development and maintenance of cloud platforms, services, and components to enable safe enterprise-wide use of common cloud functionality.

Requirements
- Bachelor's degree in Computer Science, a related engineering field, or equivalent experience
- 4+ years of experience in public cloud infrastructure, especially Azure and AWS
- Good understanding of cloud infrastructure and different deployment models
- Familiarity with cloud networking and security solutions such as load balancers, firewalls, WAF, CSPM, and security groups
- Good understanding of identity and access management solutions such as Active Directory, Azure AD, conditional access, IAM, and other vendor-specific solutions
- Good understanding of Linux- and Windows-based systems
- Understanding of SQL and NoSQL databases, including IaaS and PaaS models
- Experience in policy management, governance, monitoring, and alerts
- Knowledge of microservices, DevOps, and IaC (Terraform and Ansible)
- Azure AZ-104 or AWS administrator certification would be an advantage
- Excellent communication and interpersonal skills

Job Responsibilities
- Assist application teams in deploying various solutions in the cloud environment.
- Maintain infrastructure security and governance per client requirements and standards.
- Support other team members (database, network, security, etc.) in configuring and maintaining their respective solutions.
- Actively participate in discussions on new solution implementations, design creation, and other cloud infrastructure topics.
- PoC deployment, documentation, and technical presentations.

Linux Hosting and Administration
- Install, configure, and maintain Linux servers, ensuring optimal performance and security.
- Handle Linux-based hosting solutions, including web servers, databases, and other services.
- Apply patches and updates to Linux servers as required, and automate routine tasks.
- Monitor system performance, troubleshoot issues, and conduct root cause analysis for any server downtime.

Kubernetes Operations
- Deploy, manage, and maintain containerized applications using Kubernetes.
- Create and manage Kubernetes manifests, Helm charts, and operators for complex application architectures.
- Scale applications based on resource utilization and requirements.
- Monitor the health and performance of Kubernetes clusters and take corrective actions as needed.

DevOps Integration
- Implement and maintain CI/CD pipelines for automated testing and deployments.
- Assist in incorporating containerization and orchestration into the DevOps process.

Rancher/OpenShift Expertise (Nice to Have)
- Experience deploying and managing Kubernetes clusters using Rancher or OpenShift.
- Implement monitoring, logging, and auto-scaling solutions in Rancher or OpenShift environments.

Application Support
- Gain a thorough understanding of the applications running within containers to provide first-level application support.
- Collaborate with development teams to debug application issues in staging and production environments.

Azure Infrastructure
- Deploy and manage resources on Azure, including but not limited to VMs, databases, and Kubernetes clusters.
- Implement Infrastructure as Code practices using Azure Resource Manager (ARM) templates or Terraform.

Monitoring and Alerting Using Open-Source Tools (any one of the following)
- ELK Stack: Implement and manage the ELK (Elasticsearch, Logstash, Kibana) stack for real-time log aggregation, monitoring, and analysis. Customize Kibana dashboards for different system metrics and logs to aid in quick issue resolution.
- Grafana: Develop and maintain Grafana dashboards to visualize key performance indicators and system metrics. Integrate Grafana with other data sources and monitoring tools for comprehensive analytics.
- Loki: Set up and manage Loki for aggregating and storing logs. Integrate Loki with Grafana for unified querying and visualization of metrics and logs.
- Prometheus: Deploy and configure Prometheus for monitoring system and application metrics. Create custom Prometheus queries and alerts to catch anomalies and system performance issues.
- Mimir/Cortex (preferable): Implement Mimir or Cortex for enhanced long-term storage and scalability of Prometheus metrics.

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal besides this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
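As a concrete illustration of the monitoring-and-alerting responsibilities this role describes (deploying Prometheus and creating custom alerts), here is a minimal, self-contained sketch: it parses unlabeled metrics in Prometheus's text exposition format and checks them against simple threshold rules. The metric names, rule names, and thresholds are all hypothetical, and real Prometheus alerting would be configured as alerting rules evaluated by the server, not in application code.

```python
# Illustrative sketch only: evaluate simple threshold alerts against metrics
# in Prometheus text exposition format. Handles only unlabeled gauges; all
# metric and rule names below are hypothetical.

def parse_exposition(text):
    """Parse 'metric value' lines of Prometheus text format into a dict."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

def check_alerts(metrics, rules):
    """Return the names of rules whose metric exceeds its threshold."""
    return [rule for rule, (metric, limit) in rules.items()
            if metrics.get(metric, 0.0) > limit]

sample = """
# HELP node_cpu_usage_ratio CPU usage as a ratio of capacity.
node_cpu_usage_ratio 0.93
node_memory_usage_ratio 0.41
"""
rules = {"HighCPU": ("node_cpu_usage_ratio", 0.9),
         "HighMemory": ("node_memory_usage_ratio", 0.9)}
fired = check_alerts(parse_exposition(sample), rules)  # only HighCPU fires
```

In practice the same condition would be expressed as a PromQL alerting rule; the sketch just shows the threshold logic the posting alludes to.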
Posted 2 months ago
Ahmedabad, Gujarat, India
On-site
Role Description
Location: All UST locations
Experience Range: 5-8 years

Responsibilities
Infrastructure as Code & Cloud Automation
- Design and implement Infrastructure as Code (IaC) using Terraform, Ansible, or equivalent for both Azure and on-prem environments.
- Automate provisioning and configuration management for Azure PaaS services (App Services, AKS, Storage, Key Vault, etc.).
- Manage hybrid cloud deployments, ensuring seamless integration between Azure and on-prem alternatives.

CI/CD Pipeline Development (Without Azure DevOps)
- Develop and maintain CI/CD pipelines using GitHub Actions or Jenkins.
- Automate containerized application deployment using Docker and Kubernetes (AKS).
- Implement canary deployments, blue-green deployments, and rollback strategies for production releases.

Cloud Security & Secrets Management
- Implement role-based access control (RBAC) and IAM policies across cloud and on-prem environments.
- Secure API and infrastructure secrets using HashiCorp Vault (instead of Azure Key Vault).

Monitoring, Logging & Observability
- Set up observability frameworks using Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana).
- Implement centralized logging and monitoring across cloud and on-prem environments.
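The canary and rollback strategy mentioned under CI/CD can be sketched as a small promotion state machine: traffic shifts to the new release in fixed steps only while its observed error rate stays under budget, and rolls back otherwise. The step schedule and error budget below are hypothetical; in a real pipeline this decision would typically live in the deployment tooling (e.g., a GitHub Actions or Jenkins stage) rather than application code.

```python
# Hypothetical sketch of canary promotion logic. Traffic percentages and the
# error budget are illustrative, not values from any specific tool.

CANARY_STEPS = [5, 25, 50, 100]  # % of traffic on the new release, in order

def next_weight(current, error_rate, max_error_rate=0.01):
    """Return the next canary traffic percentage, or 0 to roll back."""
    if error_rate > max_error_rate:
        return 0  # rollback: send all traffic back to the stable release
    for step in CANARY_STEPS:
        if step > current:
            return step  # healthy: promote to the next step
    return 100  # already fully promoted

# e.g. a healthy canary at 5% is promoted to 25%; an unhealthy one at 50%
# (error rate above budget) is rolled back to 0%.
```

Blue-green deployment is the degenerate case of this schedule with a single 0 to 100 step.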
Must-Have Skills & Experience
Cloud & DevOps
- Azure PaaS services: App Services, AKS, Azure Functions, Blob Storage, Redis Cache
- Kubernetes & containerization: hands-on experience with AKS, Kubernetes, Docker
- CI/CD tools: experience with GitHub Actions, Jenkins
- Infrastructure as Code (IaC): proficiency in Terraform

Security & Compliance
- IAM & RBAC: experience with Active Directory, Keycloak, LDAP
- Secrets management: expertise in HashiCorp Vault or Azure Key Vault
- Cloud security best practices: API security, network security, encryption

Networking & Hybrid Cloud
- Azure networking: knowledge of VNets, Private Endpoints, Load Balancers, API Gateway, Nginx
- Hybrid cloud connectivity: experience with VPN Gateway, private peering

Monitoring & Performance Optimization
- Observability tools: Prometheus, Grafana, ELK Stack, Azure Monitor & App Insights
- Logging & monitoring: experience with Elasticsearch, Logstash, OpenTelemetry, Log Analytics

Good-To-Have Skills & Experience
- Experience with additional IaC tools (Ansible, Chef, Puppet)
- Experience with additional container orchestration platforms (OpenShift, Docker Swarm)
- Knowledge of advanced Azure services (e.g., Azure Logic Apps, Azure Event Grid)
- Familiarity with cloud-native monitoring solutions (e.g., CloudWatch, Datadog)
- Experience implementing and managing multi-cloud environments

Key Personal Attributes
- Strong problem-solving abilities
- Ability to work in a fast-paced and dynamic environment
- Excellent communication skills and ability to collaborate with cross-functional teams
- Proactive and self-motivated, with a strong sense of ownership and accountability

Skills: Azure, Scripting, CI/CD
Posted 2 months ago
Hyderabad, Telangana, India
On-site
Matillion is The Data Productivity Cloud. We are on a mission to power the data productivity of our customers and the world by helping teams get data business-ready, faster. Our technology allows customers to load, transform, sync, and orchestrate their data. We are looking for passionate, high-integrity individuals to help us scale up our growing business. Together, we can make a dent in the universe bigger than ourselves.

With offices in the UK, US, and Spain, we are thrilled to announce the opening of our new office in Hyderabad, India. This marks an exciting milestone in our global expansion, and we are now looking for talented professionals to join us as part of our founding team.

Role Scope
We are looking to add a Staff Site Reliability Engineer to #TeamGreen. This role can be based out of our Hyderabad office. Matillion is built around small development teams with responsibility for specific themes and initiatives. The Staff Site Reliability Engineer will be in the core & observability team, which owns the operation and efficiency of our cloud platforms and services. The team is responsible for designing and implementing our cloud infrastructure, service reliability, service deployments, service observability, and monitoring of Matillion products.
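Reliability work of the kind this team owns is usually framed in terms of availability targets and error budgets. As an illustrative aside (not a description of Matillion's internal practice), the arithmetic is simple: an availability SLO over a time window implies a budget of allowed downtime.

```python
# Illustrative SLO arithmetic: the downtime an availability target permits
# over a rolling window. The SLO values below are examples, not targets
# from any specific service.

def downtime_budget_minutes(slo, window_days=30):
    """Minutes of allowed downtime for an availability SLO over the window."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

# A 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime;
# 99% allows roughly 432 minutes (7.2 hours).
```

The same budget framing drives incident response priorities: once the budget for the window is spent, reliability work takes precedence over feature rollout.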
What you will be doing
Site Reliability Engineering
- Lead designs of major software components, systems, and features to improve the availability, scalability, latency, and efficiency of Matillion's services
- Lead sustainable incident response, blameless postmortems, and production improvements that result in direct business opportunities for Matillion
- Provide guidance to other team members on managing end-to-end availability and performance of critical services, on building automation to prevent problem recurrence, and on building automated responses for non-exceptional service conditions
- Mentor and train other team members on design techniques and coding standards, and cultivate innovation and collaboration across multiple teams
- Manage individual project priorities, deadlines, and deliverables

What we are looking for
Essential Skills
- Passion for performance, observability, availability, scalability, and security
- Previous experience of large-scale web operations in a public cloud environment
- Competence in Ruby, Go, Java, Python, or an equivalent programming language
- Experience with some of the following key technologies: Prometheus, Grafana, Elasticsearch, Logstash, Kibana, OpenTelemetry, Micrometer, New Relic, Datadog
- Experience with CloudFormation, Terraform, and other infrastructure-as-code technologies
- A solid understanding of networking systems and protocols
- Confidence in your ability to own and deliver projects and issues to resolution using Agile methodologies, with a definite bias for action and focus on results
- Excellent communication and cross-team collaboration
- A drive for personal excellence through continuous development and by keeping current with developments and offerings in the observability field
- A passion for solving problems

Personal Capabilities Required (skills, attitude, strengths)
- Inquisitiveness: digging into problems and solutions to understand the underlying technology
- Autonomy: ability to work on a task and solve problems independently
- Motivation: sets personal challenges and constantly looks to stretch themselves
- Problem solving: recognizing problems and recasting difficult-to-solve problems to find unique and innovative solutions
- Integrity: honest and transparent in dealings, open to voicing and accepting criticism, trustworthy, and builds credibility through actions
- Detail focused: pays attention to the details and makes a conscious effort to understand causes instead of just effects
- Big-picture aware: understands the scope and impact of a problem or solution

Matillion has fostered a culture that is collaborative, fast-paced, ambitious, and transparent, and an environment where people genuinely care about their colleagues and communities. Our 6 core values guide how we work together and with our customers and partners. We operate a truly flexible and hybrid working culture that promotes work-life balance, and are proud to offer the following benefits:
- Company equity
- 27 days paid time off
- 12 days of company holiday
- 5 days paid volunteering leave
- Group Mediclaim (GMC)
- Enhanced parental leave policies
- MacBook Pro
- Access to various tools to aid your career development

More about Matillion
Thousands of enterprises, including Cisco, DocuSign, Slack, and TUI, trust Matillion technology to load, transform, sync, and orchestrate their data for a wide range of use cases, from insights and operational analytics to data science, machine learning, and AI. With over $300M raised from top Silicon Valley investors, we are on a mission to power the data productivity of our customers and the world. We are passionate about doing things in a smart, considerate way. We're honoured to be named a great place to work for several years running by multiple industry research firms.
We are dual headquartered in Manchester, UK and Denver, Colorado. We are keen to hear from prospective Matillioners, so even if you don't feel you match all the criteria, please apply and a member of our Talent Acquisition team will be in touch. Alternatively, if you are interested in Matillion but don't see a suitable role, please email talent@matillion.com.

Matillion is an equal opportunity employer. We celebrate diversity and we are committed to creating an inclusive environment for all of our team. Matillion prohibits discrimination and harassment of any type. Matillion does not discriminate on the basis of race, colour, religion, age, sex, national origin, disability status, genetics, sexual orientation, gender identity or expression, or any other characteristic protected by law.
Posted 2 months ago
7 years
Gurugram, Haryana, India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in!

Job Description
REQUIREMENTS:
- Experience: 7+ years
- Extensive experience with the Azure cloud platform
- Good experience in maintaining cost-efficient, scalable cloud environments for the organization, following best practices for monitoring and cloud governance
- Experience with CI tools like Jenkins, and building end-to-end CI/CD pipelines for projects
- Experience with various build tools like Maven, Ant, or Gradle
- Rich experience with container frameworks like Docker, Kubernetes, or cloud-native container services
- Good experience in Infrastructure as Code (IaC) using tools like Terraform
- Good experience with any one of the following CM tools: Ansible, Chef, SaltStack, Puppet
- Good experience in monitoring tools like Prometheus & Grafana, Nagios, Datadog, or Zabbix, and logging tools like Splunk or Logstash
- Good experience in scripting and automation using languages like Bash/Shell, Python, PowerShell, Groovy, and Perl
- Configure and manage data sources like MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, PostgreSQL, Neo4j, etc.
- Good experience managing version control tools like Git, SVN, or Bitbucket
- Good problem-solving ability; strong written and verbal communication skills

RESPONSIBILITIES:
- Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements.
- Mapping decisions to requirements and translating them for developers.
- Identifying different solutions and narrowing down the best option that meets the client's requirements.
- Defining guidelines and benchmarks for NFR considerations during project implementation.
- Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
- Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed.
- Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
- Understanding and relating technology integration scenarios and applying these learnings in projects.
- Resolving issues raised during code review through exhaustive, systematic root cause analysis, and being able to justify the decisions taken.
- Carrying out PoCs to make sure that the suggested design and technologies meet the requirements.

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
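The scripting-and-automation requirement above is typical glue work: retrying a health probe until a freshly deployed service comes up. A minimal sketch, with the probe injected so the retry logic stays testable; the function name and attempt count are hypothetical, and a real script would sleep with backoff between attempts.

```python
# Illustrative automation sketch: poll a health probe with a bounded number
# of retries. The probe callable is injected (e.g. an HTTP GET in practice).

def wait_until_healthy(probe, attempts=5):
    """Call probe() up to `attempts` times.

    Returns the 1-based attempt on which the probe passed, or -1 if it
    never passed within the allowed attempts.
    """
    for attempt in range(1, attempts + 1):
        if probe():
            return attempt
        # a real script would back off here, e.g. time.sleep(2 ** attempt)
    return -1

# Simulated service that becomes healthy on the third check:
responses = iter([False, False, True])
result = wait_until_healthy(lambda: next(responses))
```

The same pattern could equally be written in Bash with a `for`/`sleep` loop; Python is used here only for consistency.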
Posted 2 months ago
0 years
0 Lacs
Patna, Bihar, India
Remote
Experience: 4.00+ years
Salary: INR 2,100,000/year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Adfolks LLC, a ZainTECH company)
(Note: This is a requirement for one of Uplers' clients - Adfolks LLC, a ZainTECH company.)

What do you need for this opportunity?
Must-have skills: ELK Stack, Grafana, OpenShift, Prometheus, Rancher, DevOps, Terraform, AWS, Azure, Kubernetes, Linux

Adfolks LLC (a ZainTECH company) is looking for:
Adfolks is seeking a Cloud Engineer who can join immediately to work on a high-visibility, technically interesting project.
Location: Remote

About Our Company
Adfolks LLC is a Dubai-based technology services company, operating for seven years now, with key focus areas in Data Science & Engineering, Cloud Services, Application Modernization, and Cyber Security in the Middle East region. We are an AWS Advanced Consulting Partner, a Microsoft Azure Gold Partner, and a Google Cloud Partner, and we are the only KCSP (Kubernetes Certified Service Provider) in the region. Visit our website https://adfolks.com/ to learn more.

Job Description
Experience: 4+ years

Summary
We are looking for a passionate, innovative professional to join our cloud services team. You'll work in a collaborative and inclusive environment that values diverse perspectives and continuous learning, and provides industry-leading benefits with unmatched opportunities for career growth. Key accountabilities include the development and maintenance of cloud platforms, services, and components to enable safe, enterprise-wide use of common cloud functionality.

Requirements
- Bachelor's degree in Computer Science, a related engineering field, or equivalent experience
- 4+ years of experience in public cloud infrastructure, especially Azure and AWS
- Good understanding of cloud infrastructure and the different deployment models
- Familiarity with cloud networking and security solutions such as load balancers, firewalls, WAF, CSPM, and security groups
- Good understanding of identity and access management solutions such as Active Directory, Azure AD, conditional access, IAM, and other vendor-specific solutions
- Good understanding of Linux- and Windows-based systems
- Understanding of SQL and NoSQL databases, including IaaS and PaaS models
- Experience in policy management, governance, monitoring, and alerting
- Knowledge of microservices, DevOps, and IaC (Terraform and Ansible)
- Azure AZ-104 or an AWS administrator certification would be an advantage
- Excellent communication and interpersonal skills

Job Responsibilities
- Assist application teams in deploying solutions in the cloud environment.
- Maintain infrastructure security and governance per client requirements and standards.
- Support other team members (database, network, security, etc.) in configuring and maintaining their respective solutions.
- Actively participate in discussions on new solution implementations, design creation, and all other topics related to cloud infrastructure.
- POC deployment, documentation, and technical presentations.

Linux Hosting and Administration
- Install, configure, and maintain Linux servers, ensuring optimal performance and security.
- Handle Linux-based hosting solutions, including web servers, databases, and other services.
- Apply patches and updates to Linux servers as required, and automate routine tasks.
- Monitor system performance, troubleshoot issues, and conduct root-cause analysis for any server downtime.

Kubernetes Operations
- Deploy, manage, and maintain containerized applications using Kubernetes.
- Create and manage Kubernetes manifests, Helm charts, and operators for complex application architectures.
- Scale applications based on resource utilization and requirements.
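The Kubernetes duties above (creating and managing manifests, scaling on demand) revolve around resources like the Deployment below. This is a minimal sketch only; the application name, image, and resource figures are illustrative assumptions.

```yaml
# Minimal illustrative Deployment manifest; all names and figures are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                 # hypothetical application name
spec:
  replicas: 3                    # scaled up or down based on requirements
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: app
          image: registry.example.com/demo-app:1.0.0   # assumed image
          ports:
            - containerPort: 8080
          resources:
            requests: { cpu: 100m, memory: 128Mi }     # scheduling baseline
            limits:   { cpu: 500m, memory: 256Mi }     # hard ceiling per pod
```

Applied with `kubectl apply -f deployment.yaml`; day-two scaling then becomes `kubectl scale deployment/demo-app --replicas=5` or an autoscaler acting on the resource requests above.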
- Monitor the health and performance of Kubernetes clusters and take corrective action as needed.

DevOps Integration
- Implement and maintain CI/CD pipelines for automated testing and deployments.
- Assist in incorporating containerization and orchestration into the DevOps process.

Rancher/OpenShift Expertise (Nice to Have)
- Experience deploying and managing Kubernetes clusters using Rancher or OpenShift.
- Implement monitoring, logging, and auto-scaling solutions in Rancher or OpenShift environments.

Application Support
- Gain a thorough understanding of the applications running within containers to provide first-level application support.
- Collaborate with development teams to debug application issues in staging and production environments.

Azure Infrastructure
- Deploy and manage resources on Azure, including but not limited to VMs, databases, and Kubernetes clusters.
- Implement Infrastructure as Code practices using Azure Resource Manager (ARM) templates or Terraform.

Monitoring and Alerting Using Open-Source Tools (any one of the following)

ELK Stack
- Implement and manage the ELK (Elasticsearch, Logstash, Kibana) stack for real-time log aggregation, monitoring, and analysis.
- Customize Kibana dashboards for different system metrics and logs to aid quick issue resolution.

Grafana
- Develop and maintain Grafana dashboards to visualize key performance indicators and system metrics.
- Integrate Grafana with other data sources and monitoring tools for comprehensive analytics.

Loki
- Set up and manage Loki for aggregating and storing logs.
- Integrate Loki with Grafana for unified querying and visualization of metrics and logs.

Prometheus
- Deploy and configure Prometheus for monitoring system and application metrics.
- Create custom Prometheus queries and alerts to catch anomalies and system performance issues.

Mimir/Cortex (preferable)
- Implement Mimir or Cortex for enhanced long-term storage and scalability of Prometheus metrics.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and of meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
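The monitoring-and-alerting duties in the listing above ("create custom Prometheus queries and alerts") usually take the form of Prometheus rule files like this sketch. The alert name, job setup, and 90% threshold are illustrative assumptions; the query assumes the standard node_exporter CPU metric.

```yaml
# Illustrative Prometheus alerting rule; names and threshold are hypothetical.
groups:
  - name: node-alerts
    rules:
      - alert: HighCPUUsage
        # PromQL: average CPU busy fraction per instance over 5 minutes
        expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) > 0.9
        for: 10m                 # condition must hold 10 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: "Instance {{ $labels.instance }} CPU above 90% for 10 minutes"
```

Loaded via the `rule_files` section of prometheus.yml; firing alerts are then routed through Alertmanager to channels such as email or chat.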
Posted 2 months ago
0 years
0 Lacs
Agra, Uttar Pradesh, India
Remote
Posted 2 months ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Posted 2 months ago
0 years
0 Lacs
Surat, Gujarat, India
Remote
Posted 2 months ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Experience : 4.00 + years Salary : INR 2100000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Adfolks LLC- A ZainTECH Company) (*Note: This is a requirement for one of Uplers' client - Adfolks LLC- A ZainTECH Company) What do you need for this opportunity? Must have skills required: Elk Stack, Grafana, OpenShift, Prometheus, Rancher, DevOps, Terraform, AWS, Azure, Kubernetes, Linux Adfolks LLC- A ZainTECH Company is Looking for: Adfolks is seeking Cloud Engineer who can join immediately to work in a high visibility, technically interesting project. Location: Remote About Our Company Adfolks LLC which is a Dubai based technology services company for 7 years now with key focus areas in DataScience & Engineering, Cloud Services, Application Modernization, and Cyber Security in the Middle East region. We are an Advanced Consulting Partner with AWS, Microsoft Azure Gold Partner and Google Cloud Partner and, we are the only KCSP [Kubernetes Certified Service Provider] in the region. Visit our website https://adfolks.com/ to know more. Job Description Experience: 4+ Years Summary We are looking for a passionate, innovate professional to join our cloud services team. You’ll work in a collaborative and inclusive environment that values diverse perspectives and continuous learning and provides industry-leading benefits with unmatched opportunities for career growth. Key accountabilities include development and maintenance of cloud platforms, services and components to enable safe enterprise-wide use of cloud common functionality. Requirements Bachelor’s degree in Computer Science, related Engineering field, or equivalent experience 4+ years of experience in public cloud infrastructure, especially Azure and AWS. 
Good understanding of cloud infrastructure, and different deployment models Should be familiar with cloud networking and security solutions like load balancer, firewall, WAF, CSPM, security group, etc. Good understanding of identity and access management solutions like Active directory, Azure AD, conditional access, IAM and other vendor specific solutions Good understanding of Linux and windows based systems Understanding of SQL & NoSQL Databases including IAAS and PAAS models. Experience in policy management, governance, monitoring and alerts Knowledge in microservices, DevOps and IaC (Terraform and Ansible). Azure AZ-104 or AWS administrator certification would be an advantage Excellent communication and interpersonal skills Job responsibilities Assist application team to deploy various solutions in the cloud environment. Maintain infrastructure security and governance as per the client requirement and standards. Support other team members (database, network, security, etc.) to configure and maintain respective solution. Actively Involve in discussions related to new solution implementation, design creation and all other discussions related to cloud infrastructure. POC deployment, documentation, and technical presentation. Linux Hosting and Administration Install, configure, and maintain Linux servers, ensuring optimal performance and security. Handle Linux-based hosting solutions, including web servers, databases, and other services. Apply patches and updates to Linux servers as required, and automate routine tasks. Monitor system performance, troubleshoot issues, and conduct root cause analysis for any server downtime. Kubernetes Operations Deploy, manage, and maintain containerized applications using Kubernetes. Create and manage Kubernetes manifests, helm charts, and operators for complex application architectures. Scale applications based on resource utilization and requirements. 
Monitor the health and performance of Kubernetes clusters and take corrective actions as needed. DevOps Integration Implement and maintain CI/CD pipelines for automated testing and deployments. Assist in incorporating containerization and orchestration into the DevOps process. Rancher/OpenShift Expertise (Nice to Have) Experience in deploying and managing Kubernetes clusters using Rancher or OpenShift. Implement monitoring, logging, and auto-scaling solutions in Rancher or OpenShift environments. Application Support Gain a thorough understanding of the applications running within containers to provide first-level application support. Collaborate with development teams to debug application issues in staging and production environments. Azure Infrastructure Deploy and manage resources on Azure, including but not limited to VMs, databases, and Kubernetes clusters. Implement Infrastructure as Code practices using Azure Resource Manager (ARM) templates or terraform Monitoring and Alerting Using Open-Source Tools (Any one of the following) ELK Stack Implement and manage the ELK (Elasticsearch, Logstash, Kibana) stack for real-time log aggregation, monitoring, and analysis. Customize Kibana dashboards for different system metrics and logs to aid in quick issue resolution. Grafana Develop and maintain Grafana dashboards to visualize key performance indicators and system metrics. Integrate Grafana with other data sources and monitoring tools for comprehensive analytics. Loki Set up and manage Loki for aggregating and storing logs. Integrate Loki with Grafana for unified querying and visualization of metrics and logs. Prometheus Deploy and configure Prometheus for monitoring system and application metrics. Create custom Prometheus queries and alerts to catch anomalies and system performance issues. Mimir/Cortex (prefereable) Implement Mimir or Cortex for enhanced long-term storage and scalability of Prometheus metrics. How to apply for this opportunity? 
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you with any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
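As a concrete illustration of the Prometheus alerting responsibility mentioned in the role above, a rule such as `avg_over_time(cpu_usage[5m]) > 0.9` reduces to the check below. This is a hedged, pure-Python sketch; the metric name, samples, and threshold are invented for illustration and are not part of the posting.

```python
# Hypothetical sketch of the evaluation behind a Prometheus-style alert rule,
# e.g. `avg_over_time(cpu_usage[5m]) > 0.9`. Metric and threshold are
# illustrative assumptions only.

def avg_over_time_alert(samples, threshold=0.9):
    """Return True when the windowed average of a metric breaches the threshold."""
    if not samples:
        return False  # no data: do not fire (a real rule might alert on absence instead)
    return sum(samples) / len(samples) > threshold

# Sustained high CPU fires the alert; a brief spike averaged out does not.
print(avg_over_time_alert([0.95, 0.97, 0.92]))  # True
print(avg_over_time_alert([0.95, 0.40, 0.35]))  # False
```

In practice Prometheus evaluates such expressions continuously over a scrape window and routes firing alerts through Alertmanager; the sketch captures only the threshold logic.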
Posted 2 months ago
21 - 31 years
50 - 70 Lacs
Bengaluru
Work from Office
What we’re looking for As a member of the Infrastructure team at SurveyMonkey, you will have a direct impact on designing, engineering, and maintaining our Cloud, Messaging, and Observability Platform. You will apply best practices to deployment processes and architecture, and support the ongoing operation of our multi-tenant AWS environments. This role is a prime opportunity to build world-class infrastructure, solve complex problems at scale, learn new technologies, and mentor other engineers. What you’ll be working on Architect, build, and operate AWS environments at scale with well-established industry best practices. Automate infrastructure provisioning, DevOps, and/or continuous integration/delivery. Provide Technical Leadership & Mentorship Mentor and guide senior engineers to build technical expertise and drive a culture of excellence in software development. Foster collaboration within the engineering team, ensuring the adoption of best practices in coding, testing, and deployment. Review code and provide constructive feedback to ensure code quality and adherence to architectural principles. Collaboration & Cross-Functional Leadership Collaborate with cross-functional teams (Product, Security, and other Engineering teams) to drive the roadmap and ensure alignment with business objectives. Provide technical leadership in meetings and discussions, influencing key decisions on architecture, design, and implementation. Innovation & Continuous Improvement Propose, evaluate, and integrate new tools and technologies to improve the performance, security, and scalability of the cloud platform. Drive initiatives for optimizing cloud resource usage and reducing operational costs without compromising performance. Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems. Participate in the on-call rotation.
Support and partner with other teams on improving our observability systems to monitor site stability and performance. We’d love to hear from people with: 12+ years of relevant professional experience with cloud platforms such as AWS and Heroku. Extensive experience leading design sessions and evolving well-architected environments in AWS at scale. Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/YAML), and Helm. Experience with Splunk, OpenTelemetry, CloudWatch, or tools like New Relic, Datadog, Grafana/Prometheus, or ELK (Elasticsearch/Logstash/Kibana). Experience with metrics and logging libraries and aggregators, and with data analysis and visualization tools, specifically Splunk and OTel. Experience instrumenting PHP, Python, Java, and Node.js applications to send metrics, traces, and logs to third-party observability tooling. Experience with GitOps and tools like Argo CD/Flux CD. Interest in instrumentation and optimization of Kubernetes clusters. Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms. Experience with GitHub/GitHub Actions/Jenkins/GitLab in either a software engineering or DevOps environment. Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka, and Debezium. Preferably experience with secrets management, for example HashiCorp Vault. Preferably experience in an agile environment and with JIRA. SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI-Hybrid
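The "libraries and APIs that provide a simple, unified interface" responsibility above can be pictured with a minimal labeled-counter abstraction, the shape most metrics libraries (Prometheus clients, OTel SDKs) expose. This is an illustrative sketch only, not SurveyMonkey's actual tooling; the class, metric, and label names are invented.

```python
from collections import defaultdict

class Counter:
    """Tiny sketch of a labeled counter, the common shape of metrics-library APIs."""
    def __init__(self, name):
        self.name = name
        self._values = defaultdict(float)

    def inc(self, amount=1, **labels):
        # Labels are stored as a sorted tuple so code=200 always maps to the
        # same time series regardless of keyword order.
        key = tuple(sorted(labels.items()))
        self._values[key] += amount

    def value(self, **labels):
        return self._values[tuple(sorted(labels.items()))]

requests = Counter("http_requests_total")
requests.inc(code=200)
requests.inc(code=200)
requests.inc(code=500)
print(requests.value(code=200))  # 2.0
```

A real library adds thread safety, registries, and an exposition format on top; the point of a unified interface is that application code only ever calls `inc` and never knows which backend scrapes the values.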
Posted 2 months ago
21 - 31 years
35 - 42 Lacs
Bengaluru
Work from Office
What we’re looking for As a member of the Infrastructure team at SurveyMonkey, you will have a direct impact on designing, engineering, and maintaining our Cloud, Messaging, and Observability Platform. You will apply best practices to deployment processes and architecture, and support the ongoing operation of our multi-tenant AWS environments. This role is a prime opportunity to build world-class infrastructure, solve complex problems at scale, learn new technologies, and mentor other engineers. What you’ll be working on Architect, build, and operate AWS environments at scale with well-established industry best practices. Automate infrastructure provisioning, DevOps, and/or continuous integration/delivery. Support and maintain AWS services such as EKS and Heroku. Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems. Support and partner with other teams on improving our observability systems to monitor site stability and performance. Work closely with developers in supporting new features and services. Work in a highly collaborative team environment. Participate in the on-call rotation. We’d love to hear from people with: 8+ years of relevant professional experience with cloud platforms such as AWS and Heroku. Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/YAML), and Helm. Experience with Splunk, OpenTelemetry, CloudWatch, or tools like New Relic, Datadog, Grafana/Prometheus, or ELK (Elasticsearch/Logstash/Kibana). Experience with metrics and logging libraries and aggregators, and with data analysis and visualization tools, specifically Splunk and OTel. Experience instrumenting PHP, Python, Java, and Node.js applications to send metrics, traces, and logs to third-party observability tooling. Experience with GitOps and tools like Argo CD/Flux CD. Interest in instrumentation and optimization of Kubernetes clusters.
Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms. Experience with GitHub/GitHub Actions/Jenkins/GitLab in either a software engineering or DevOps environment. Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka, and Debezium. Preferably experience with secrets management, for example HashiCorp Vault. Preferably experience in an agile environment and with JIRA. SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI-Hybrid
Posted 2 months ago
3 - 8 years
10 - 20 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Job Description: As an ELK (Elasticsearch, Logstash & Kibana) Data Engineer, you will be responsible for developing, implementing, and maintaining ELK stack-based solutions for Kyndryl's clients. The role covers efficient and effective data and log ingestion, processing, indexing, and visualization for monitoring, troubleshooting, and analysis purposes. Key Responsibilities: Configure Logstash to receive, filter, and transform logs from diverse sources (e.g., servers, applications, AppDynamics, storage, databases, and so on) before sending them to Elasticsearch. Configure ILM policies, index templates, etc. Develop Logstash configuration files to parse, enrich, and filter log data from various input sources (e.g., APM tools, databases, storage, and so on). Implement techniques like grok patterns, regular expressions, and plugins to handle complex log formats and structures. Ensure efficient and reliable data ingestion by optimizing Logstash performance, handling high data volumes, and managing throughput. Utilize Kibana to create visually appealing dashboards, reports, and custom visualizations. Collaborate with business users to understand their data integration and visualization needs and translate them into technical solutions. Establish correlations within the data and develop visualizations to detect root causes of issues. Integrate with ticketing tools such as ServiceNow. Hands-on experience with ML and Watcher functionalities. Monitor Elasticsearch clusters for health, performance, and resource utilization. Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides. Education, Experience, and Certification Requirements: BS or MS degree in Computer Science or a related technical field. 5+ years of overall IT industry experience.
3+ years of development experience with Elasticsearch, Logstash, and Kibana in designing, building, and maintaining log and data processing systems. 3+ years of Python or Java development experience. 4+ years of SQL experience (NoSQL experience is a plus). 4+ years of experience with schema design and dimensional data modelling. Experience working with machine learning models is a plus. Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes) is a plus. An “Elastic Certified Engineer” certification is preferable.
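The grok-pattern work described in this role amounts to named-capture parsing of semi-structured log lines. As a hedged sketch, here is a stdlib-Python equivalent of what a grok pattern such as `%{IP:client} %{WORD:method} %{URIPATH:path} %{NUMBER:status}` does; the log line format is an invented example, not any client's actual format.

```python
import re

# Rough Python analogue of a Logstash grok pattern like
# "%{IP:client} %{WORD:method} %{URIPATH:path} %{NUMBER:status}".
# The access-log format here is hypothetical.
LOG_PATTERN = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3})\s+"   # IPv4 address
    r"(?P<method>[A-Z]+)\s+"                    # HTTP method
    r"(?P<path>/\S*)\s+"                        # request path
    r"(?P<status>\d{3})"                        # status code
)

def parse_log_line(line):
    """Return the structured fields Logstash would forward to Elasticsearch, or None."""
    match = LOG_PATTERN.search(line)
    return match.groupdict() if match else None

print(parse_log_line("10.0.0.7 GET /api/health 200"))
# {'client': '10.0.0.7', 'method': 'GET', 'path': '/api/health', 'status': '200'}
```

Grok itself is a library of named regex fragments over exactly this mechanism, which is why unparseable lines (returning None here) are typically tagged `_grokparsefailure` rather than dropped.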
Posted 2 months ago
6 - 11 years
3 - 7 Lacs
Bengaluru
Work from Office
About The Role We are looking for a skilled Elasticsearch Developer to design, develop, and optimize search solutions using Elasticsearch. The ideal candidate will have strong experience in managing Elasticsearch clusters, implementing search functionalities, and integrating Elasticsearch with various applications. Key Responsibilities: Design, implement, and maintain Elasticsearch clusters to support large-scale search applications. Develop, optimize, and maintain custom search queries, aggregations, and indexing strategies. Work with data pipelines, including ingestion, transformation, and storage of structured and unstructured data. Integrate Elasticsearch with web applications, APIs, and other data storage systems. Implement scalability, performance tuning, and security best practices for Elasticsearch clusters. Troubleshoot search performance issues and enhance the relevance and efficiency of search results. Work with Kibana, Logstash, and Beats for visualization and data analysis. Collaborate with developers, data engineers, and DevOps teams to deploy and maintain search infrastructure. Stay updated on the latest Elasticsearch features, plugins, and best practices. Primary Skills Strong experience with Elasticsearch (versions 7.x/8.x) and related tools (Kibana, Logstash, Beats). Proficiency in writing complex Elasticsearch queries, aggregations, and analyzers. Experience with full-text search, relevance tuning, and ranking algorithms. Knowledge of indexing, mapping, and schema design for optimal search performance. Proficiency in Python, Java, or Node.js for developing search applications. Experience with RESTful APIs and integrating Elasticsearch with various platforms. Familiarity with distributed systems, clustering, and high-availability configurations. Hands-on experience with Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP) is a plus. Strong problem-solving skills and ability to troubleshoot performance bottlenecks.
Preferred Qualifications: Experience with machine learning-based search ranking and recommendation systems. Knowledge of vector search and Elasticsearch's kNN capabilities. Understanding of security best practices, including authentication and role-based access. Familiarity with log analytics and monitoring tools. Education: Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
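For concreteness, the "complex Elasticsearch queries, aggregations" skill above refers to request bodies like the one below: a full-text match combined with a filter and a terms aggregation. This is a generic sketch against a hypothetical `products` index; the index and field names are invented for illustration.

```python
import json

# A representative Elasticsearch _search request body: full-text match plus a
# range filter and a terms aggregation. Index and field names are hypothetical.
query = {
    "query": {
        "bool": {
            # Scored full-text clause: analyzed match on the title field.
            "must": [{"match": {"title": "wireless headphones"}}],
            # Non-scoring, cacheable filter clause: price cap.
            "filter": [{"range": {"price": {"lte": 200}}}],
        }
    },
    "aggs": {
        # Bucket matching docs by exact brand (keyword sub-field, not analyzed text).
        "by_brand": {"terms": {"field": "brand.keyword", "size": 10}}
    },
    "size": 20,
}

# The dict serializes to the JSON body of e.g. POST /products/_search.
print(json.dumps(query, indent=2))
```

Putting the price condition in `filter` rather than `must` is the standard design choice: filters skip scoring and are cached, which matters at the cluster-tuning scale this role describes.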
Posted 2 months ago
4 - 9 years
7 - 11 Lacs
Hyderabad
Work from Office
Primary Skills
1. Java (8/11/17+): Strong expertise in Core Java, multithreading, collections, and functional programming.
2. Spring Boot: Hands-on experience with Spring Boot for developing RESTful microservices.
3. Microservices Architecture: Understanding of microservices design patterns, inter-service communication, and distributed systems.
4. Google Cloud Platform (GCP): Experience with Google Kubernetes Engine (GKE) for deploying and managing containerized applications, Cloud Run for running containerized applications in a serverless environment, Cloud Functions for serverless function execution, Cloud Pub/Sub for event-driven communication, and Firestore / Cloud SQL for working with NoSQL and relational databases on GCP.
5. Containers & Docker: Experience in containerizing applications using Docker and managing images.
6. Kubernetes (GKE Preferred): Strong knowledge of Pods, Deployments, Services, ConfigMaps, Secrets, and Helm charts for Kubernetes resource management.
7. RESTful APIs: Experience in designing, building, and consuming REST APIs with security best practices.
8. CI/CD Pipelines: Hands-on experience with Jenkins, GitHub Actions, GitLab CI/CD, or Google Cloud Build for automated testing and deployment of microservices.
9. Cloud Networking: Understanding of VPCs, load balancers, and service mesh (Istio).
10. SQL & NoSQL Databases: Experience with PostgreSQL, MySQL, Firestore, or MongoDB.
11. Logging & Monitoring: Familiarity with Google Cloud Logging (Stackdriver), Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana).
Secondary Skills
Infrastructure as Code (IaC): Terraform for GCP infrastructure automation.
Event-Driven Architecture: Working knowledge of Kafka, Pub/Sub, or RabbitMQ.
Security Best Practices: Authentication/authorization using OAuth2, JWT, and IAM roles.
Testing Frameworks: JUnit, Mockito, and integration testing for microservices.
GraphQL: Exposure to GraphQL API development.
Agile Methodologies: Experience working in Agile/Scrum teams.
Performance Tuning: Experience optimizing application performance and memory management.
Multi-Cloud Exposure: Knowledge of AWS or Azure is a plus.
DevSecOps: Exposure to security scanning tools like Snyk and SonarQube, and OWASP best practices.
API Management: Experience with API gateways like Apigee or Kong is beneficial.
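The event-driven Pub/Sub pattern named in the skills above reduces to topic fan-out: publishers emit to a topic, and every subscriber on that topic receives the message. The following is a minimal in-memory sketch of the pattern only, not the Google Cloud Pub/Sub API; the class, topic, and field names are invented.

```python
from collections import defaultdict

class PubSub:
    """In-memory fan-out sketch of the publish/subscribe pattern.

    Illustrative only: Google Cloud Pub/Sub layers durability, acknowledgements,
    and push/pull delivery on top of this basic shape.
    """
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a handler for a topic; multiple handlers may coexist.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every handler subscribed to this topic.
        for callback in self._subscribers[topic]:
            callback(message)

bus = PubSub()
received = []
bus.subscribe("orders", received.append)
bus.publish("orders", {"order_id": 1, "status": "created"})
print(received)  # [{'order_id': 1, 'status': 'created'}]
```

The decoupling is the point: the publisher never learns who consumes the event, which is what lets microservices evolve independently.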
Posted 2 months ago
0 years
0 Lacs
Bengaluru, Karnataka
Work from Office
About this opportunity: The A&AI (SL IT & ADM) team is currently seeking a versatile and motivated DevOps Engineer (with expertise in Kubernetes and cloud infrastructure) to join the AI/ML team. This role will be pivotal in managing multiple platforms and systems, focusing on Kubernetes, ELK/OpenSearch, and various DevOps tools to ensure seamless data flow for our machine learning and data science initiatives. The ideal candidate should have a strong foundation in Python programming, experience with Elasticsearch, Logstash, and Kibana (ELK), proficiency in MLOps, and expertise in machine learning model development and deployment. Additionally, familiarity with basic Spark concepts and visualization tools like Grafana and Kibana is desirable. What you will do: Design and implement robust AI/ML infrastructure using cloud services and Kubernetes to support machine learning operations (MLOps) and data processing workflows. Deploy, manage, and optimize Kubernetes clusters specifically tailored for AI/ML workloads, ensuring optimal resource allocation and scalability across different network configurations. Develop and maintain CI/CD pipelines tailored for continuous training and deployment of machine learning models, integrating tools like Kubeflow, MLflow, ArgoFlow, or TensorFlow Extended (TFX). Collaborate with data scientists to oversee the deployment of machine learning models and set up monitoring systems to track their performance and health in production. Design and implement data pipelines for large-scale data ingestion, processing, and analytics essential for machine learning models, utilizing distributed storage and processing technologies such as Hadoop, Spark, and Kafka. The skills you bring: Extensive experience with Kubernetes and cloud services (AWS, Azure, GCP, private cloud) with a focus on deploying and managing AI/ML environments. Strong proficiency in scripting and automation using languages like Python, Bash, or Perl.
Experience with AI/ML tools and frameworks (TensorFlow, PyTorch, Scikit-learn) and MLOps tools (Kubeflow, MLflow, TFX). In-depth knowledge of data pipeline and workflow management tools, distributed data processing (Hadoop, Spark), and messaging systems (Kafka, RabbitMQ). Expertise in implementing CI/CD pipelines, infrastructure as code (IaC), and configuration management tools. Familiarity with security standards and data protection regulations relevant to AI/ML projects. Proven ability to design and maintain reliable and scalable infrastructure tailored for AI/ML workloads. Excellent analytical, problem-solving, and communication skills. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Bangalore Req ID: 766746
Posted 2 months ago
0 years
0 Lacs
Bengaluru, Karnataka
Work from Office
About this opportunity: This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming; hands-on experience with Elasticsearch, Logstash, and Kibana (ELK); a solid grasp of fundamental Spark concepts; and familiarity with visualization tools such as Grafana and Kibana. A background in MLOps and expertise in both machine learning model development and deployment will be highly advantageous.

What you will do:
- Generative AI & LLM development; 12-15 years of experience as an Enterprise Software Architect with strong hands-on experience.
- Strong hands-on experience in Python and microservice architecture concepts and development.
- Expertise in crafting technical guides and architecture designs for an AI platform.
- Experience in the Elastic Stack, Cassandra, or any big data tool.
- Experience with advanced distributed systems and tooling, for example Prometheus, Terraform, Kubernetes, Helm, Vault, and CI/CD systems.
- Prior experience building multiple AI/ML-based models, deploying them into production environments, and creating the data pipelines.
- Experience in guiding teams working on AI, ML, big data, and analytics.
- Strong understanding of development practices such as architecture design, coding, testing, and verification.
- Experience with delivering software products, for example release management and documentation.

What you will bring:
- Python development: Write clean, efficient, and maintainable Python code to support data engineering tasks, including data collection, transformation, and integration with machine learning models.
- Data pipeline development: Design, develop, and maintain robust data pipelines that efficiently gather, process, and transform data from various sources into a format suitable for machine learning and data science tasks, using the ELK stack, Python, and other leading technologies.
- Spark knowledge: Apply basic Spark concepts for distributed data processing when necessary, optimizing data workflows for performance and scalability.
- ELK integration: Utilize Elasticsearch, Logstash, and Kibana (ELK) for data management, data indexing, and real-time data visualization. Knowledge of OpenSearch and its related stack would be beneficial.
- Grafana and Kibana: Create and manage dashboards and visualizations using Grafana and Kibana to provide real-time insights into data and system performance.
- Kubernetes deployment: Deploy data engineering solutions and machine learning models to a Kubernetes-based environment, ensuring security, scalability, reliability, and high availability.
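The data-indexing side of ELK work often comes down to Elasticsearch's `_bulk` API, whose request body is newline-delimited JSON: one action line followed by one document line per record. A minimal sketch of building such a body follows; the index name and log records are made up, and nothing is sent over the network.

```python
import json

# Sketch of an Elasticsearch _bulk request body (NDJSON): for each record,
# an action/metadata line ({"index": {"_index": ...}}) then the document
# source line. Index name and records below are hypothetical.

def to_bulk_ndjson(index: str, docs: list) -> str:
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(doc))                           # source line
    return "\n".join(lines) + "\n"  # a _bulk body must end with a newline

body = to_bulk_ndjson("app-logs", [
    {"level": "ERROR", "msg": "timeout", "service": "auth"},
    {"level": "INFO", "msg": "started", "service": "auth"},
])
print(body, end="")
```

In practice this body would be POSTed to `/_bulk` with the `application/x-ndjson` content type, or built for you by a client library's bulk helper.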
Primary country and city: India (IN) || Bangalore Req ID: 766747
Posted 2 months ago
0 - 6 years
0 Lacs
Mumbai, Maharashtra
Work from Office
Work location: Mumbai
Interview location: Pune
Interview date: 15th Feb 25
L2: 4 to 6 years of experience

Job Description:
- Must have hands-on experience working with the Elasticsearch, Logstash, Kibana (ELK), Prometheus, and Grafana monitoring systems.
- Experience with installation, upgrade, and management of ELK, Prometheus, and Grafana systems.
- Hands-on experience with ELK, Prometheus, and Grafana administration, configuration, performance tuning, and troubleshooting.
- Knowledge of various clustering topologies, such as redundant assignments and active-passive setups, and of two or more cloud platforms (e.g., AWS EC2 and Azure) for deploying the clusters.
- Experience with Logstash pipeline design and management, and with search index optimization and tuning.
- Implement security measures and ensure compliance with security policies and procedures, such as the CIS benchmarks.
- Collaborate with other teams to ensure seamless integration of the environment with other systems.
- Create and maintain documentation related to the environment.

Key Skills:
- Certification in a monitoring system such as ELK.
- RHCSA/RHCE certification; experience on the Linux platform.
- Must have knowledge of monitoring tools such as Prometheus, Grafana, the ELK stack, ManageEngine, or any APM tool.

Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field.
Job Location: Mumbai
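The core of Logstash pipeline design is field extraction: turning a raw log line into structured fields (what a grok filter does). A minimal sketch of that extraction, mimicked with a plain Python regex, is below; the log format and field names are hypothetical.

```python
import re

# Sketch of the field extraction a Logstash grok filter performs on a raw
# log line, mimicked here with a Python regex. Format and field names are
# invented for illustration.

LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+ \S+) (?P<level>[A-Z]+) (?P<service>\S+) - (?P<message>.*)"
)

def parse_line(line: str):
    """Return a dict of extracted fields, or None if the line doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

event = parse_line("2025-02-15 10:04:31 ERROR payments - connection refused")
print(event["level"], event["service"])  # ERROR payments
```

In a real Logstash pipeline the equivalent logic lives in a `filter { grok { ... } }` block, with unmatched lines tagged for a dead-letter path instead of returning None.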
Posted 2 months ago
7 - 12 years
45 - 65 Lacs
Pune
Work from Office
We're hiring a Senior Backend Python Developer (7+ yrs) to build scalable, AI-powered systems using Django/Flask/FastAPI, GCP, Kubernetes & GraphQL. Design APIs, drive architecture, mentor teams & integrate ML for high-performance platforms.
Posted 2 months ago
4 - 8 years
900 - 1000 Lacs
Chennai
Remote
Join us and be a part of this journey as we write customer success stories about these products.

WHAT YOU DO
- Interface with business customers, gathering and understanding requirements.
- Interface with customer and Genesys data science teams in discovery, extraction, loading, data transformation, and analysis of results.
- Define and utilize a data-intuition process to cleanse and verify the integrity of customer and Genesys data to be used for analysis.
- Implement, own, and improve data pipelines using best practices in data modeling and ETL/ELT processes.
- Build, improve, and provide ongoing optimization of high-quality models.
- Work with PS & Engineering to deliver specific customer requirements, and report back customer feedback, issues, and feature requests.
- Continuously improve reporting, analysis, and the overall process.
- Visualize, present, and demonstrate findings as required.
- Perform knowledge transfer to customer and internal teams.
- Communicate within the global community, respecting cultural, language, and time-zone variations.
- Demonstrate flexibility to adjust working hours to match customer and team interactions.

ABOUT YOU
- Bachelor's or Master's degree in a quantitative field (e.g., Computer Science, Statistics, Engineering).
- 5+ years of relevant experience in data science or data engineering.
- 5+ years of hands-on experience in Elasticsearch, Kibana, and real-time analytics solution development.
- Hands-on application development experience in AWS/Azure, and experience in Snowflake, Tableau, or Power BI.
- Expertise with major statistical and analytical software such as Python, R, or SAS.
- Good working knowledge of any programming language, such as Java or Node.js.
- Application development background using any contact center product suite, such as Genesys, Avaya, or Cisco, is an added advantage.
- Expertise with data modeling, data warehousing, and ETL/ELT development.
- Expertise with database solutions such as SQL, MongoDB, Redshift, Hadoop, and Hive.
- Proficiency with REST APIs, JSON, and AWS.
- Experience working on and delivering projects independently.
- Ability to multi-task and context-switch between projects and tasks.
- Curiosity, passion, and drive for data queries, analysis, quality, and models.
- Excellent communication, initiative, and coordination skills with great attention to detail.
- Ability to explain and discuss complex topics with both experts and business leaders.
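The extract-transform-load pattern behind the pipeline work described above can be sketched in a few lines of pure Python; the record schema (agent names, call durations) is invented for illustration, with a plain list standing in for the warehouse.

```python
# Minimal extract-transform-load (ETL) sketch of the kind of pipeline step
# described above. Records and schema are hypothetical.

def extract(raw_rows):
    """Extract: yield raw interaction records (an in-memory stub here)."""
    yield from raw_rows

def transform(rows):
    """Transform: normalize durations to seconds and drop incomplete rows."""
    for row in rows:
        if row.get("duration_ms") is None:
            continue  # incomplete record: skip rather than load bad data
        yield {"agent": row["agent"].lower(),
               "duration_s": row["duration_ms"] / 1000}

def load(rows, sink: list):
    """Load: append cleaned rows to the target store (a list stands in)."""
    sink.extend(rows)

warehouse = []
load(transform(extract([
    {"agent": "ALICE", "duration_ms": 154000},
    {"agent": "BOB", "duration_ms": None},   # dropped by transform
])), warehouse)
print(warehouse)  # [{'agent': 'alice', 'duration_s': 154.0}]
```

The same three-stage shape scales up directly: swap the stub extract for a source query, the list for a warehouse writer, and run the stages under a scheduler.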
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Pune
Work from Office
Project Role: Software Development Lead
Project Role Description: Develop and configure software systems, either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes, and tools to support a client, project, or entity.
Must-have skills: Splunk
Good-to-have skills: NA
Minimum 2 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Position Name: Splunk/ELK Developer

Professional & Technical Skills:
- Must-have skills: Proficiency in Splunk and ELK administration and development.
- Must-have skills: Hands-on experience with ELK stack components (Elasticsearch, Logstash, Kibana) and their seamless integration.
- Log management: Utilize the ELK stack to collect, process, and analyze log data, ensuring efficient log management and searchability.
- Familiarity with Kibana dashboard creation, health checks, Linux system administration, shell scripting, and any one cloud platform.
- Develop field extractions, lookups, and data transformations to ensure accurate and meaningful data analysis.
- Create dashboards, alerts, saved searches, lookups, macros, field extractions, field transformations, tags, and event types.
- Experience in architecting and administering Splunk distributed environments with components such as Universal/Heavy Forwarders, Indexers, Cluster Masters, Deployment Servers, Search Heads, License Masters, and Search Head Clusters.
- Manage and edit various .conf files, such as indexes.conf, inputs.conf, outputs.conf, props.conf, transforms.conf, and server.conf.
- Experience with log parsing, complex Splunk searches, and external table lookups.
- Create and manage KPIs, Glass Tables, and Service Health Scores to provide real-time visibility into IT operations.
- Install, configure, and maintain Splunk, Splunk add-ons, and apps.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.
Additional Information: The candidate should have a minimum of 2 years of experience in Splunk. This position is based at our Pune office. 15 years of full-time education is required.
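The .conf files named above (props.conf, transforms.conf, etc.) use an INI-like syntax of `[stanza]` headers with `key = value` pairs, which Python's stdlib configparser can read. A small sketch follows; the sourcetype stanza and setting values are illustrative, not a real production config, though TIME_PREFIX, MAX_TIMESTAMP_LOOKAHEAD, and SHOULD_LINEMERGE are genuine props.conf settings.

```python
import configparser

# Sketch of programmatically reading a Splunk-style props.conf stanza.
# The stanza name and values below are hypothetical examples.

PROPS_CONF = """
[my_app_logs]
TIME_PREFIX = ^\\[
MAX_TIMESTAMP_LOOKAHEAD = 25
SHOULD_LINEMERGE = false
"""

parser = configparser.ConfigParser()
parser.read_string(PROPS_CONF)

# configparser lowercases option names, and lookups are case-insensitive.
stanza = parser["my_app_logs"]
print(stanza["SHOULD_LINEMERGE"])  # false
```

Reading configs this way is handy for auditing a large Splunk deployment (e.g. flagging sourcetypes that still line-merge), though Splunk itself layers .conf files by precedence, which a single-file read does not capture.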
Posted 2 months ago
0 years
0 Lacs
Hyderabad, Telangana
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a talented Solutions Architect with a keen interest in Amazon Connect-based contact center solutions and telephony. The ideal candidate will have foundational knowledge of contact center technologies and a solid desire to learn and grow alongside industry experts. This role offers the opportunity to work on cutting-edge projects involving AI, GenAI, and cloud-based platforms.

Primary Responsibilities:
- Solution design support: Assist in designing and developing Amazon Connect-based contact center solutions. Contribute to focus areas such as product development, data & analytics, routing, desktop/CTI, WFM/WFO, and SBC/telephony. Participate in integrating AI and GenAI technologies into contact center solutions.
- Technical contribution: Work hands-on to create accelerators and tools that enhance the productivity of engineering and delivery teams. Support the collection of requirements and assist in converting them into technical specifications in collaboration with engineering and delivery teams.
- Problem solving & support: Help address production and non-production issues by providing timely support and solutions. Participate in the end-to-end process, from feature grooming to Day 2 support.
- Collaboration & communication: Collaborate with cross-functional teams to understand project requirements and deliverables. Clearly articulate ideas and technical concepts in both written and verbal formats. Utilize design tools such as Draw.io, PlantUML, Mermaid, PowerPoint, Miro, and Figma to create documentation and diagrams.
- Learning & development: Develop a deep understanding of application, technology, and data architecture principles. Acquire knowledge of protocol stacks and data entities relevant to contact center technologies. Stay updated with industry standards and technologies from vendors such as Amazon, Google, Microsoft, and Oracle.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Completion of a graduate degree.
- Hands-on experience with a cloud-native tech stack and diverse technologies: Java; public cloud (Azure); Docker and microservices (Spring Boot); RDBMS (MySQL) and NoSQL (Cassandra, MongoDB, Elastic); APIs (REST, GraphQL); API gateways (Kong, etc.); data streaming (Kafka); visualization (Grafana, Kibana); the ELK stack (Elasticsearch, Logstash, and Kibana); API Gateway; GenAI; AI/ML.
- Experience in solution architecture with a focus on contact center technologies and telephony systems. This could include experience as a full stack engineer on the mentioned platforms.
- Experience with design and documentation tools such as Draw.io, PlantUML, Mermaid, PowerPoint, Word, Excel, Miro, and Figma.
- Basic understanding of application development and architecture principles.
- Proven solid communication and interpersonal skills.
- Proven ability to articulate thoughts clearly and effectively in written and verbal communication.
- Proven eagerness to learn and adapt to new technologies and methodologies.
- Proven analytical mindset with problem-solving abilities.

Preferred Qualifications:
- Certifications such as AWS Certified Cloud Practitioner or AWS Certified Developer - Associate.
- Experience with agile development processes and collaboration tools.
- Exposure to AI and GenAI technologies.
- Knowledge of cloud platforms (AWS, Azure, Google Cloud) and basic AI concepts.
- Familiarity with agile methodologies and DevOps practices.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Posted 2 months ago