3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
You will be a Cloud Engineer at PAC Panasonic Avionics Corporation, based in Pune, India. Your primary responsibility will be to modernize the legacy SOAP-based Airline Gateway (AGW) by building a cloud-native, scalable, and traceable architecture using AWS, Python, and DevOps practices. This involves migrating from legacy SOAP APIs to modern REST APIs and implementing CI/CD pipelines, containerization, and automation to improve system performance and reliability. The role also spans backend development, networking, and cloud-based solutions that contribute to scalable, efficient applications.

Key responsibilities:
- Design, build, and deploy cloud-native solutions on AWS, with a focus on migrating SOAP-based APIs to RESTful APIs
- Develop and maintain backend services and web applications in Python that integrate with cloud services and systems
- Implement CI/CD pipelines, automation, and containerization using tools such as Docker, Kubernetes, and Terraform
- Use Python for backend development, including writing API services, handling business logic, and managing integrations with databases and AWS services
- Ensure scalability, security, and high availability of cloud systems, and implement monitoring and logging for real-time observability
- Collaborate with cross-functional teams to integrate cloud-based solutions and deliver high-quality, reliable systems

Requirements include experience with AWS cloud services and architecture (EC2, S3, Lambda, API Gateway, RDS, VPC, IAM, CloudWatch, among others), strong backend development experience with Python, proficiency in building and maintaining web applications and backend services, and a solid understanding of Python web frameworks such as Flask, Django, or FastAPI. Experience with database integration, DevOps tools, RESTful API design, and cloud security best practices is essential, as is familiarity with monitoring tools and the ability to manage cloud infrastructure and deliver scalable solutions.

The ideal candidate has 3 to 5 years of experience; additional assets include exposure to airline industry systems, AWS certifications, familiarity with serverless architectures and microservices, and strong problem-solving abilities. If you are passionate about cloud engineering, have a strong background in Python development, and are eager to help modernize legacy systems with current technologies, we welcome your application to PAC Panasonic Avionics Corporation.
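For readers skimming the stack this posting describes, the following is a minimal, illustrative sketch of a Python REST endpoint of the kind typically built with FastAPI during a SOAP-to-REST migration. The route, model, and field names are invented for the example and are not part of the actual Airline Gateway.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class FlightStatus(BaseModel):
    flight_id: str
    status: str

@app.get("/flights/{flight_id}/status", response_model=FlightStatus)
def get_flight_status(flight_id: str) -> FlightStatus:
    # A real service would look this up in RDS or a downstream system;
    # the stubbed value keeps the example self-contained.
    return FlightStatus(flight_id=flight_id, status="ON_TIME")
```

Run locally with `uvicorn main:app --reload`; on AWS, handlers in this style are commonly fronted by API Gateway.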
Posted 6 days ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
This position requires an experienced professional with a hands-on EPM infrastructure administration background in the Oracle Hyperion suite of products. Your primary responsibilities will include managing day-to-day operations, providing end-user support, and supporting project initiatives across the EPM platform. You will collaborate closely with functional teams to address complex business issues and design scalable system solutions. Your role will be crucial in establishing a "best-in-class" EPM solution for the company.

As the ideal candidate, you should possess a deep understanding of Hyperion concepts and their practical application in a production business environment. You will be responsible for supporting critical business operations and independently delivering quality services across multiple customer engagements. Your duties will involve analyzing functional and operational issues within the Oracle EPM/Hyperion environment, as well as developing, implementing, and supporting Oracle's global infrastructure.

Key Responsibilities:
- Analyzing and resolving functional and operational issues within the Oracle EPM/Hyperion environment
- Managing operations and end-user support for various Hyperion EPM 11x platform components
- Troubleshooting integration and data issues across Hyperion applications and boundary systems
- Coordinating operational handover with global support teams
- Assisting business partners with application configuration and administration
- Monitoring and streamlining tasks and procedures in line with company standards
- Collaborating with technical infrastructure teams, application teams, and business units to ensure service delivery meets business objectives
- Establishing and implementing standard operational policies and procedures
- Coordinating patching activities and monthly deployments with infrastructure teams
- Monitoring and optimizing Hyperion system performance
- Modifying existing scripts and consolidation/business rules as needed
- Participating in system testing, data validations, and stress testing of new features
- Managing daily terminations from Hyperion systems per SOX requirements
- Creating technical documentation and updating end-user training materials
- Providing business continuity and disaster recovery solutions for the Hyperion environment
- Improving existing technical processes and developing new ones to enhance Oracle support for customers
- Collaborating with project managers and customer implementation teams to ensure project success
- Providing mentoring and cross-training to team members
- Setting up new releases of the Hyperion application in the lab environment
- Collaborating with offshore team members and global business customers
- Demonstrating strong customer management skills and adherence to processes

Requirements:
- Knowledge of Hyperion/EPM 11.1.2.4 and 11.2.x
- Experience with various Hyperion EPM components
- Hands-on experience with Hyperion installation, upgrades, and migration
- Knowledge of Oracle Database, WebLogic application server, and performance tuning
- Experience with cloud EPM solutions is advantageous
- Strong understanding of Hyperion system tools and Windows Server technologies
- Bachelor's degree in Computer Science, Engineering, MIS, or a related field

Desired Skills:
- Knowledge of ServiceNow, SRs, RFCs, and My Oracle Support
- Familiarity with OAC-Essbase, Essbase 19c, and Essbase 21c
- Implementation experience in DRM
- EPMA-to-DRM migration for Hyperion 11.1.2.x to 11.2.x

If you meet the requirements and possess the desired skills, this role offers an opportunity to contribute to the success of the company's EPM platform and to work in a challenging yet rewarding environment. Your ability to collaborate effectively, communicate clearly, and drive technical excellence will be key to your success in this role.
Posted 6 days ago
10.0 years
0 Lacs
Greater Kolkata Area
Remote
Java Back End Engineer with AWS
Location: Remote | Experience: 10+ Years | Employment Type: Full-Time

Job Overview
We are looking for a highly skilled Java Back End Engineer with strong AWS cloud experience to design and implement scalable backend systems and APIs. You will work closely with cross-functional teams to develop robust microservices, optimize database performance, and contribute across the tech stack, including infrastructure automation.

Core Responsibilities
- Design, develop, and deploy scalable microservices using Java, J2EE, Spring, and Spring Boot
- Build and maintain secure, high-performance APIs and backend services on AWS or GCP
- Use JUnit and Mockito to ensure test-driven development and maintain code quality
- Develop and manage ETL workflows using tools like Pentaho, Talend, or Apache NiFi
- Create High-Level Design (HLD) and architecture documentation for system components
- Collaborate with cross-functional teams (DevOps, Frontend, QA) as a full-stack contributor when needed
- Tune SQL queries and manage performance on MySQL and Amazon Redshift
- Troubleshoot and optimize microservices for performance and scalability
- Use Git for source control and participate in code reviews and architectural discussions
- Automate infrastructure provisioning and CI/CD processes using Terraform, Bash, and pipelines

Primary Skills
- Languages & Frameworks: Java (8/17/21), Spring Boot, J2EE, Servlets, JSP, JDBC, Struts
- Architecture: Microservices, REST APIs
- Cloud Platforms: AWS (EC2, S3, Lambda, RDS, CloudFormation, SQS, SNS) or GCP
- Databases: MySQL, Redshift

Secondary Skills (Good to Have)
- Infrastructure as Code (IaC): Terraform
- Additional Languages: Python, Node.js
- Frontend Frameworks: React, Angular, JavaScript
- ETL Tools: Pentaho, Talend, Apache NiFi (or equivalent)
- CI/CD & Containers: Jenkins, GitHub Actions, Docker, Kubernetes
- Monitoring/Logging: AWS CloudWatch, DataDog
- Scripting: Bash, Shell scripting

Nice to Have
- Familiarity with agile software development practices
- Experience in a cross-functional engineering environment
- Exposure to DevOps culture and tools
Posted 6 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Name: Infrastructure Security Engineer
Location: Onsite, Ahmedabad
Job Type: Full Time

Position Overview
We are seeking an experienced Infrastructure Security Engineer to join our cybersecurity team and play a critical role in protecting our organization's digital infrastructure. This position requires a versatile security professional who can operate across multiple domains, including cloud security, vulnerability and patch management, endpoint protection, and security operations.

Key Responsibilities

AWS Cloud Security
- Design, implement, and maintain security controls across AWS environments, including IAM policies, security groups, NACLs, and VPC configurations
- Configure and manage AWS security services such as CloudTrail, GuardDuty, Security Hub, Config, and Inspector
- Implement Infrastructure as Code (IaC) security best practices using CloudFormation, Terraform, or CDK
- Conduct regular security assessments of cloud architectures and recommend improvements
- Manage AWS compliance frameworks and ensure adherence to industry standards (SOC 2, ISO 27001, etc.)

Vulnerability Management
- Lead enterprise-wide vulnerability assessment programs using tools such as Nessus
- Develop and maintain vulnerability and patch management policies, procedures, SLAs, and regular reporting
- Coordinate with IT and development teams to prioritize and remediate security vulnerabilities
- Generate executive-level reports on vulnerability metrics and risk exposure
- Conduct regular penetration testing and security assessments of applications and infrastructure

Patch Management
- Design and implement automated patch management strategies across Windows, Linux, and cloud environments
- Coordinate with system administrators to schedule and deploy critical security patches
- Maintain patch testing procedures to minimize business disruption
- Monitor patch compliance across the enterprise and report on patch deployment status
- Develop rollback procedures and incident response plans for patch-related issues

Endpoint Security
- Deploy and manage endpoint detection and response (EDR) solutions such as CrowdStrike
- Configure and tune endpoint security policies, including antivirus, application control, and device encryption
- Investigate and respond to endpoint security incidents and malware infections
- Implement mobile device management (MDM) and bring-your-own-device (BYOD) security policies
- Conduct forensic analysis of compromised endpoints when required

Required Qualifications

Education & Experience
- Bachelor's degree in Computer Science, Information Security, or a related field
- Minimum 5+ years of hands-on experience in information security roles
- 3+ years of experience with AWS cloud security architecture and services

Technical Skills
- Cloud Security: deep expertise in AWS security services, IAM, VPC security, and cloud compliance frameworks
- Vulnerability Management: proficiency with vulnerability scanners (Qualys, Nessus, Rapid7) and risk assessment methodologies
- Patch Management: experience with automated patching tools (WSUS, Red Hat Satellite, AWS Systems Manager)
- Endpoint Security: hands-on experience with EDR/XDR platforms and endpoint management tools
- SIEM/SOAR: advanced skills in log analysis, correlation rule development, and security orchestration
- Operating Systems: strong knowledge of Windows and Linux security hardening and administration

Security Certifications (Preferred)
- AWS Certified Security - Specialty
- CISSP (Certified Information Systems Security Professional)
- GCIH (GIAC Certified Incident Handler)
- CEH (Certified Ethical Hacker)

Key Competencies
- Strong analytical and problem-solving skills with attention to detail
- Excellent communication skills and the ability to explain complex security concepts to technical and non-technical stakeholders
- Project management capabilities, with experience leading cross-functional security initiatives
- Ability to work in fast-paced environments and manage multiple priorities
- Strong understanding of regulatory compliance requirements (PCI-DSS, HIPAA, SOX, GDPR)
- Experience with risk assessment frameworks and security governance

Reporting Structure
This position reports to the Engineering Manager, Cyber Security, and collaborates closely with IT Operations and Development teams.
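As an aside on the AWS security tooling named in this posting, the sketch below shows one way to pull active, high-severity findings from AWS Security Hub with boto3. It is illustrative only; the region and filter values are assumptions, and credentials are expected to come from the environment.

```python
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

# Filter keys follow the Security Hub GetFindings API: active, high-severity findings only.
response = securityhub.get_findings(
    Filters={
        "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=50,
)

for finding in response["Findings"]:
    # Print a short triage line per finding: title plus the first affected resource.
    print(finding["Title"], "->", finding["Resources"][0]["Id"])
```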
Posted 6 days ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Trading Technologies — multi-asset platform for capital markets
Full Time | Ahmedabad/GiftCity

The Site Reliability Engineer (SRE) position is a software development-oriented role, focusing heavily on coding, automation, and ensuring the stability and reliability of our global platform. The ideal candidate will primarily be a skilled software developer capable of participating in on-call rotations. The SRE team develops sophisticated telemetry and automation tools, proactively monitoring platform health and executing automated corrective actions. As guardians of the production environment, the SRE team leverages advanced telemetry to anticipate and mitigate issues, ensuring continuous platform stability.

What Will You Be Involved With?
- Develop and maintain advanced telemetry and automation tools for monitoring and managing global platform health
- Actively participate in on-call rotations, swiftly diagnosing and resolving system issues and escalations from the customer support team (this is not a customer-facing role)
- Implement automated solutions for incident response, system optimization, and reliability improvement
- Proactively identify potential system stability risks and implement preventive measures

What Will You Bring to the Table?

Software Development
- 3+ years of professional Python development experience
- Strong grasp of Python object-oriented programming concepts and inheritance
- Experience developing multi-threaded Python applications
- 2+ years of experience using Terraform, with proficiency in creating modules and submodules from scratch
- Proficiency in, or willingness to learn, Golang

Operating Systems
- Experience with Linux operating systems
- Strong understanding of monitoring critical system health parameters

Cloud
- 3+ years of hands-on experience with AWS services including EC2, Lambda, CloudWatch, EKS, ELB, RDS, DynamoDB, and SQS
- AWS Associate-level certification or higher preferred

Networking
- Basic understanding of network protocols: TCP/IP, DNS, HTTP, and load balancing concepts

Additional Qualifications (Preferred)
- Familiarity with trading systems and low-latency environments is advantageous but not required

What We Bring to the Table
Compensation: ₹2,000,000 – ₹2,980,801 per year. We offer a comprehensive benefits package designed to support your well-being, growth, and work-life balance.

Health & Financial Security:
- Medical, Dental, and Vision coverage
- Group Life (GTL) and Group Income Protection (GIP) schemes
- Pension contributions

Time Off & Flexibility:
- Enjoy the best of both worlds: the energy and collaboration of in-person work, combined with the convenience and focus of remote days. This is a hybrid position requiring three days of in-office collaboration per week, with the flexibility to work remotely for the remaining two days. Our hybrid model is designed to balance individual flexibility with the benefits of in-person collaboration: enhanced team cohesion, spontaneous innovation, hands-on mentorship opportunities, and a stronger company culture.
- 25 days of Paid Time Off (PTO) per year, with the option to roll over unused days
- One dedicated day per year for volunteering
- Two professional development days per year to allow uninterrupted professional development
- An additional PTO day added during milestone anniversary years
- Robust paid holiday schedule with early dismissal
- Generous parental leave for all parents (including adoptive parents)

Work-Life Support & Resources:
- Budget for tech accessories, including monitors, headphones, keyboards, and other office equipment
- Milestone anniversary bonuses

Wellness & Lifestyle Perks:
- Subsidy contributions toward gym memberships and health/wellness initiatives (including discounted healthcare premiums, healthy meal delivery programs, or smoking cessation support)

Our Culture:
Forward-thinking, culture-based organization with collaborative teams that promote diversity and inclusion.

Trading Technologies is a Software-as-a-Service (SaaS) technology platform provider to the global capital markets industry. The company's award-winning TT® platform connects to the world's major international exchanges and liquidity venues in listed derivatives alongside a growing number of asset classes, including fixed income and cryptocurrencies. The TT platform delivers advanced tools for trade execution and order management, market data solutions, analytics, trade surveillance, risk management, and infrastructure services to the world's leading sell-side institutions, buy-side firms, and exchanges. The company's blue-chip client base includes Tier 1 banks as well as brokers, money managers, hedge funds, proprietary traders, Commodity Trading Advisors (CTAs), commercial hedgers, and risk managers. These firms rely on the TT ecosystem to manage their end-to-end trading operations. In addition, exchanges utilize TT's technology to deliver innovative solutions to their market participants. TT also strategically partners with technology companies to make their complementary offerings available to Trading Technologies' global client base through the TT ecosystem.

Trading Technologies (TT) is an equal-opportunity employer. Equal employment has been, and continues to be, a required practice at the Company. Trading Technologies' practice of equal employment opportunity is to recruit, hire, train, promote, and base all employment decisions on ability rather than race, color, religion, national origin, sex/gender orientation, age, disability, sexual orientation, genetic information, or any other protected status. Additionally, TT participates in the E-Verify Program for US offices.

To apply for this job please visit tradingtechnologies.pinpointhq.com
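As a concrete, purely illustrative example of the telemetry-and-automation work this SRE posting describes, the Python sketch below uses boto3 to list CloudWatch alarms currently in the ALARM state — the kind of signal an automated corrective action might key off. The region is an assumption, and nothing here reflects Trading Technologies' internal tooling.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# describe_alarms supports filtering by state, so only firing alarms come back.
alarms = cloudwatch.describe_alarms(StateValue="ALARM")

for alarm in alarms["MetricAlarms"]:
    # In a real SRE workflow this is where an automated remediation or page would trigger.
    print(alarm["AlarmName"], "-", alarm["StateReason"])
```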
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
As an Infrastructure Technical Architect at Salesforce Professional Services, you will play a crucial role in enabling customers to get the most out of MuleSoft platforms while guiding and mentoring a dynamic team. Your expertise and leadership will establish you as a subject-matter expert in a company dedicated to innovation.

You will bring experience with container technology such as Docker and Kubernetes, as well as proficiency in configuring IaaS services on major cloud providers like AWS, Azure, or GCP. Strong infrastructure automation skills, including familiarity with tools like Terraform and AWS CloudFormation, will be key to success in this role, and your knowledge of networking, Linux, systems programming, distributed systems, databases, and cloud computing will be a valuable asset. You will work with compiled languages such as Java or C++ alongside dynamic languages like Ruby or Python, and your experience in production-level environments will be essential in providing innovative solutions to complex challenges.

Preferred qualifications include certifications in Cloud Architecture or Solution Architecture (AWS, Azure, GCP), as well as expertise in Kubernetes and MuleSoft platforms. Experience with DevSecOps, Gravity/Gravitational, Red Hat OpenShift, and operators will be advantageous. A track record of architecting and implementing highly available, scalable, and secure infrastructure will set you apart, and effective troubleshooting, along with hands-on experience in performance testing and tuning, will be critical in delivering high-quality solutions. Strong communication skills and customer-facing experience are essential for managing expectations and fostering positive relationships with clients.

If you are passionate about driving innovation and making a positive impact through technology, this role offers a unique opportunity to grow your career and contribute to transformative projects. Join us at Salesforce, where we empower you to be a Trailblazer and shape the future of business.
Posted 6 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
- Total experience of 10+ years
- Strong working experience in Data Engineering and Big Data platforms
- Hands-on experience with Python and PySpark
- Expertise with AWS Glue, including Crawlers and the Data Catalog
- Hands-on experience with Snowflake
- Strong understanding of AWS services: S3, Lambda, Athena, SNS, Secrets Manager
- Experience with Infrastructure-as-Code (IaC) tools like CloudFormation and Terraform
- Strong experience with CI/CD pipelines, preferably using GitHub Actions
- Working knowledge of Agile methodologies, JIRA, and GitHub version control
- Experience with data quality frameworks and observability
- Exposure to data governance tools and practices
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams

RESPONSIBILITIES:
- Writing and reviewing great quality code
- Understanding the client's business use cases and technical requirements and converting them into a technical design that elegantly meets those requirements
- Mapping decisions to requirements and translating them for developers
- Identifying different solutions and narrowing down the best option that meets the client's requirements
- Defining guidelines and benchmarks for NFR considerations during project implementation
- Writing and reviewing design documents that explain the overall architecture, framework, and high-level design of the application for developers
- Reviewing architecture and design for extensibility, scalability, security, design patterns, user experience, NFRs, and other aspects, and ensuring that all relevant best practices are followed
- Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it
- Understanding and relating technology integration scenarios and applying these learnings in projects
- Resolving issues raised during code review through exhaustive, systematic root-cause analysis, and justifying the decisions taken
- Carrying out POCs to make sure that the suggested design/technologies meet the requirements

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
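As context for the AWS Glue and PySpark skills listed above, here is a minimal sketch of what a Glue ETL job script can look like. It assumes the Glue job runtime (where the awsglue libraries are available), and the database, table, and bucket names are placeholders invented for the example.

```python
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Glue passes job arguments on the command line; JOB_NAME is the standard one.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Read a table registered in the Glue Data Catalog, drop rows with no event type,
# and write the result back to S3 as Parquet.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db", table_name="raw_events"
)
df = dyf.toDF().filter("event_type IS NOT NULL")
df.write.mode("overwrite").parquet("s3://example-bucket/curated/events/")
```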
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of our dynamic team, you will play a pivotal role in revolutionizing customer relationship management (CRM) by leveraging advanced artificial intelligence (AI) capabilities. The groundbreaking partnership between Salesforce and Google Cloud, valued at $2.5 billion, aims to enhance customer experiences through the integration of Google's Gemini AI models into Salesforce's Agentforce platform. By enabling businesses to use multi-modal AI capabilities for processing images, audio, and video, we are paving the way for unparalleled customer interactions.

Join us in advancing the integration of Salesforce applications on the Google Cloud Platform (GCP). This is a unique opportunity to work at the forefront of identity provider (IDP), AI, and cloud computing, contributing to the development of a comprehensive suite of Salesforce applications on GCP. You will be instrumental in building a platform on GCP to facilitate agentic solutions on Salesforce.

Our Public Cloud engineering teams innovate on and maintain a large-scale distributed systems engineering platform. Responsible for delivering hundreds of features daily to tens of millions of users across various industries, our teams ensure high reliability, speed, security, and seamless preservation of customizations and integrations with each deployment. If you have deep experience in concurrency, large-scale systems, data management, high-availability solutions, and back-end system optimization, we want you on our team.

Your Impact:
- Develop cloud infrastructure automation tools, frameworks, workflows, and validation platforms on public cloud platforms such as AWS, GCP, Azure, or Alibaba
- Design, develop, debug, and operate resilient distributed systems spanning thousands of compute nodes across multiple data centers
- Use and contribute to open-source technologies such as Kubernetes, Argo, etc.
- Implement Infrastructure-as-Code using Terraform
- Create microservices on containerization frameworks such as Kubernetes, Docker, and Mesos
- Resolve complex technical issues and drive innovations that improve system availability, resilience, and performance
- Maintain a balance between live-site management, feature delivery, and retirement of technical debt
- Participate in the on-call rotation to address real-time, complex problems and keep services operational and highly available

Required Skills:
- Proficiency in Terraform, Kubernetes, or Spinnaker
- Deep knowledge of programming languages such as Java, Golang, Python, or Ruby
- Working experience with Falcon
- Ownership and operation of critical service instances
- Experience with Agile development and Test-Driven Development methodologies
- Familiarity with essential infrastructure services, including monitoring, alerting, logging, and reporting applications
- Experience with public cloud platforms preferred

If you are passionate about cutting-edge technologies, thrive in a fast-paced environment, and are eager to make a significant impact in the world of CRM and AI, we welcome you to join our team.
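For a flavor of the container-orchestration work described above, here is a small, illustrative sketch using the official Kubernetes Python client to report pods that are not healthy. It assumes a local kubeconfig and is not tied to any Salesforce-internal platform.

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config; inside a cluster you would use
# config.load_incluster_config() instead.
config.load_kube_config()
core_v1 = client.CoreV1Api()

# Flag any pod that is not Running or Succeeded, across all namespaces.
for pod in core_v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```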
Posted 6 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries, and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for contributing to the development and deployment of machine learning algorithms. Evaluates the accuracy and functionality of machine learning algorithms as part of a larger team. Contributes to translating application requirements into machine learning problem statements. Analyzes and evaluates solutions, both internally generated and third-party supplied. Contributes to developing ways to use machine learning to solve problems and discover new products, working on a portion of the problem and collaborating with more senior researchers as needed. Works with moderate guidance in own area of knowledge.

Job Description

Core Responsibilities

About the Role
We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You'll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators, and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally.
What You'll Do
- Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root-cause inference
- Engineer features to enrich alerts using service relationships, business context, change history, and topological data
- Apply NLP and ML techniques to classify and structure logs and unstructured alert messages
- Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs
- Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data
- Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness
- Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers
- Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals
- Conduct A/B testing, offline validation, and live performance monitoring of ML models
- Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers, and leadership
- Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements
- Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping
- Lead and support innovation efforts, including POCs, POVs, and exploration of emerging AI/ML tools and strategies
- Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly
- Participate in on-call rotations and provide operational support as needed

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field
- 3+ years of experience building and deploying ML solutions in production environments
- 2+ years working with AIOps, observability, or real-time operations data
- Strong coding skills in Python (including pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow)
- Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark
- Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs)
- Strong grasp of modern ML techniques, including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection
- Experience managing structured and unstructured data, and building features from logs, alerts, metrics, and traces
- Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink
- Strong understanding of model evaluation techniques, including precision/recall trade-offs, ROC, AUC, and calibration
- Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases
- Ability to collaborate effectively with SREs and platform teams, and to participate in Agile/DevOps workflows
- Clear written and verbal communication skills to present findings to technical and non-technical stakeholders
- Comfortable working across Git, Confluence, JIRA, and collaborative agile environments

Nice to Have
- Experience building or contributing to an AIOps platform (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC)
- Experience working in streaming media, OTT platforms, or large-scale consumer services
- Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling
- Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, and event-based alerting tools
- Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation)
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data
- Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration
- Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries
- Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME)
- Experience with containerized model deployment using Docker or Kubernetes
- Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation
- Experience designing APIs in Python or Go to expose models as services
- Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing
- Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts
- Certifications in cloud architecture, ML engineering, or data science specializations

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance, and always-on tools that are personalized to meet the needs of your reality – to help support you physically, financially, and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education
Bachelor's Degree
While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience
2-5 Years
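To illustrate the kind of anomaly-detection work this role centers on, below is a small, self-contained Python sketch that flags unusual alert windows with scikit-learn's IsolationForest. The synthetic features (alert count, mean latency, error rate) are invented for the example and are not Comcast data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic feature matrix: one row per time window of operational data.
rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 120, 0.01], scale=[5, 10, 0.005], size=(500, 3))
incidents = rng.normal(loc=[400, 900, 0.20], scale=[20, 50, 0.02], size=(5, 3))
X = np.vstack([normal, incidents])

# Fit an unsupervised model; contamination is the expected fraction of anomalies.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)

flagged = np.where(model.predict(X) == -1)[0]  # -1 marks anomalous windows
print(f"{len(flagged)} of {len(X)} windows flagged as anomalous")
```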
Posted 6 days ago
10.0 - 14.0 years
0 Lacs
Kolkata, West Bengal
On-site
As a Senior Machine Learning Engineer with over 10 years of experience, you will play a crucial role in designing, building, and deploying scalable machine learning systems in production. You will collaborate closely with data scientists to operationalize models, own ML pipelines end to end, and enhance the reliability, automation, and performance of our ML infrastructure.

Your primary responsibilities will include designing and building robust ML pipelines and services for training, validation, and model deployment. You will work with stakeholders such as data scientists, solution architects, and DevOps engineers to ensure alignment with project goals and requirements. You will also ensure cloud integration compatibility with AWS and Azure, build reusable infrastructure components following best practices in DevOps and MLOps, and adhere to security standards and regulatory compliance.

To excel in this role, you should have strong programming skills in Python, deep experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn, and proficiency in MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience deploying models with Docker and Kubernetes, familiarity with cloud platforms and ML services, and proficiency in data engineering tools are essential, and knowledge of CI/CD, version control, and infrastructure as code, along with experience with monitoring and logging tools, will be advantageous.

Good-to-have skills include experience with feature stores and experiment tracking platforms; knowledge of edge/embedded ML, model quantization, and optimization; and familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases and experience leading cross-functional initiatives or mentoring junior engineers will also be beneficial.

Joining Ericsson gives you an exceptional opportunity to use your skills and creativity to address some of the world's toughest challenges. You will be part of a diverse team of innovators committed to pushing the boundaries of innovation and crafting groundbreaking solutions. As a member of this team, you will be challenged to think beyond conventional limits and contribute to shaping the future of technology.
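As a small illustration of the MLOps tooling named above, the sketch below logs a toy scikit-learn model, a parameter, and a metric to MLflow. The run name and values are placeholders; a production pipeline would add validation, a model registry stage, and deployment steps.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Track what was trained and how well it did, so runs are comparable later.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")
```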
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As an Azure DevOps Engineer, you will be responsible for managing and optimizing Azure cloud services, with a focus on Azure App Services and related technologies. Your main tasks will include designing, implementing, and maintaining CI/CD pipelines for .NET applications using Azure DevOps Services, and working on Infrastructure as Code (IaC) using ARM templates, Bicep, or Terraform. Collaboration with development teams to integrate DevOps practices into the Software Development Life Cycle (SDLC) is an essential part of this role, as are monitoring deployments, troubleshooting issues, and ensuring scalability, security, and cost optimization.

To excel in this role, you should have 5-7 years of hands-on experience in Azure DevOps, along with strong knowledge of Azure App Services, App Gateway, Key Vault, and networking. Proficiency with CI/CD pipelines for .NET applications and a good understanding of IaC, Git, YAML pipelines, and release management will be beneficial. Strong troubleshooting skills and the ability to collaborate effectively with team members are also important for this position.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Delhi
On-site
We are looking for a highly skilled OCI Certified Solution Architect to design, implement, and manage cloud solutions on Oracle Cloud Infrastructure (OCI). You should have a strong technical background, hands-on experience with OCI services, and the ability to translate business requirements into scalable, secure, and efficient cloud architectures.

Your responsibilities will include:
- Designing and implementing cloud-based solutions leveraging Oracle Cloud Infrastructure services
- Providing architectural guidance on cloud adoption, migration, and deployment strategies
- Collaborating with cross-functional teams to define requirements and develop scalable cloud solutions
- Ensuring best practices for security, performance, and cost optimization across OCI environments
- Supporting migration projects from on-premises or other cloud platforms to OCI
- Establishing automation, CI/CD pipelines, and infrastructure as code (IaC) frameworks
- Monitoring system performance and troubleshooting issues proactively
- Documenting architecture designs, policies, and procedures
- Staying up to date with OCI services, features, and industry trends

To be successful in this role, you should have the following qualifications and skills:
- OCI Certified Solution Architect (Associate or Professional level)
- Proven experience designing and deploying solutions on OCI
- Strong knowledge of cloud architecture, networking, security, and enterprise application deployment
- Experience with automation tools such as Terraform, Ansible, or equivalent
- Good understanding of containerization, Kubernetes (OKE), and DevOps practices
- Familiarity with databases, storage solutions, and disaster recovery strategies
- Excellent communication and stakeholder management skills
- Bachelor's degree in Computer Science, Information Technology, or a related field; relevant certifications are a plus

Preferred qualifications include experience with hybrid cloud and multi-cloud strategies, knowledge of scripting languages (Python, Bash, etc.), and prior experience with migration projects to OCI.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
We are seeking a skilled CI/CD Developer to join our dynamic team. Your job will involve designing, implementing, and maintaining CI/CD pipelines to ensure smooth and efficient deployment of applications built with PHP, Java, and PostgreSQL. You will be responsible for automating build, test, and deployment processes, integrating various tools and services, monitoring and troubleshooting CI/CD pipelines, collaborating with development, QA, and operations teams, managing version control systems, documenting CI/CD processes, and implementing security best practices.

Your technical expertise should include proficiency in CI/CD tools such as Jenkins, GitLab CI, or CircleCI; strong knowledge of PHP and Java; experience with PostgreSQL and database migration tools; proficiency in scripting languages like Bash, Python, or Groovy; and familiarity with version control systems like Git, automation tools such as Ansible or Terraform, containerization technologies like Docker, and orchestration tools like Kubernetes. Strong analytical and problem-solving skills, excellent verbal and written communication skills, and the ability to work effectively in a collaborative team environment round out the profile.

In this role, you will have the opportunity to lead business change, drive transformation, and work in a global, dynamic, and highly collaborative team. You will also benefit from a wide range of stellar benefits, including health, dental, vision, and life insurance, as well as paid time off, sick time, and parental leave.

If you are looking to apply your technical expertise in CI/CD processes and tools, collaborate with diverse teams, and contribute to the deployment of innovative applications, this role at Amdocs could be the perfect fit for you. Apply now to be part of a leading global organization that values diversity and inclusion in its workforce.
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
You will be joining Cisco Secure Common Services Engineering, a team of cybersecurity experts and innovative engineers dedicated to supporting the products and developers across Cisco Security. The team prioritizes putting people first, taking ambitious steps together, and emphasizing transparency at every stage of the process. As part of our growing team, you will play a crucial role in taking Platform and Security to the next level, contributing to our continuous journey of growth and improvement.

As a Senior Software Engineer within Common Services Platform Engineering, you will play a key role in providing the essential building blocks for the Cisco Security Cloud. Your primary focus will be to ensure that our products work seamlessly together, delivering an exceptional user experience for our customers and stakeholders within Cisco. You will approach challenges with fresh perspectives, iterating with a commitment to continuous improvement. Problem-solving and innovation will be central to your responsibilities, with a direct impact on a multi-billion-dollar market. Your role will involve leading design efforts, influencing and contributing to implementation, and ensuring the delivery of quality and timely releases aligned with business objectives, regulatory compliance, and future scalability.

Collaboration will be a cornerstone of your work as you engage with multiple teams, including PM, UX, and Engineering Management. Your ability to simplify complex concepts, communicate effectively, and move swiftly to achieve business objectives will be essential. You will be tasked with identifying opportunities to enhance user experience, improve performance, and drive automation. Staying informed about emerging technologies and trends in Identity and Access Management (IAM) will be crucial, allowing you to incorporate advancements that future-proof the organization's identity framework. Additionally, you will have the opportunity to mentor engineers, guiding them to produce their best work and fostering a culture of continuous learning and growth.

**Minimum Qualifications:**
- 8+ years of hands-on experience in designing and developing scalable software products/services
- Strong expertise in developing and maintaining highly available, containerized microservices using GoLang and REST
- Proven experience with popular AWS services such as API Gateway and DynamoDB
- Familiarity with identity protocols (SAML, OAuth, OpenID Connect) and directory services (Active Directory, LDAP)
- Experience identifying and addressing performance bottlenecks in microservices
- Good knowledge of cloud security best practices and compliance standards (e.g., GDPR, HIPAA, FedRAMP)

**Preferred Qualifications:**
- Demonstrated compassion and strong problem-solving skills with excellent written and verbal communication abilities
- Knowledge of Zero Trust frameworks
- Experience with CI/CD and Terraform
- Experience building and managing services with a 99.99% SLA

At Cisco, we believe in the power of connection and celebrate the diverse backgrounds and unique skills of our employees. We are committed to fostering an inclusive future for all, where learning and development are encouraged at every stage of your career. Our innovative technology, tools, and culture support hybrid work trends, enabling our employees to not only perform at their best but also be their best selves.

As a global technology leader, we recognize the importance of bringing communities together, with our people at the heart of everything we do. Our employees actively participate in our 30 employee resource organizations, known as Inclusive Communities, to promote connection, belonging, allyship, and positive change. We provide dedicated paid time off for volunteering, allowing our employees to give back to causes they are passionate about.

Join us at Cisco and be part of a purpose-driven organization that is shaping the future of technology and the internet. Our commitment to helping customers reimagine their applications, secure their enterprise, transform their infrastructure, and achieve sustainability goals drives us forward every day. Together, we are building a more inclusive future for all. Take the next step in your career and embrace your unique potential with us.

For applicants applying to work in the U.S. and/or Canada:
- Access to quality medical, dental, and vision insurance
- 401(k) plan with a matching contribution from Cisco
- Short- and long-term disability coverage
- Basic life insurance and various wellbeing offerings
- Incentive opportunities based on revenue attainment, with no cap on incentive compensation for exceeding 100% attainment
Posted 6 days ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
Are you a seasoned DevSecOps and cloud integration expert with a passion for automation and Kubernetes? We're hiring an Application Service Integration Subject Matter Expert (SME) to lead the service automation and secure integration strategy for migrating containerized applications and databases from on-prem Kubernetes to Azure Kubernetes Service (AKS).

As the Application Service Integration SME, your key responsibilities will include leading the DevSecOps strategy and secure integration patterns across AKS, middleware, and databases. You will build Infrastructure as Code (IaC) templates using tools like Terraform, Bicep, ARM templates, and Ansible, and design integrations for technologies such as Tomcat, Kafka, Redis, RabbitMQ, PostgreSQL, and MongoDB. You will also develop secure REST API integrations with Azure Key Vault, Azure Monitor, and API Management. Collaboration across Cloud, Middleware, and Database teams for dependency mapping will be crucial in this role, as will integrating CI/CD workflows using Azure DevOps and GitHub Actions and implementing observability through Azure Monitor, Log Analytics, and Container Insights. Ensuring compliance with DevSecOps standards and providing L3 support are also part of your responsibilities.

If you are ready to lead cloud-native transformations and build secure, automated, and scalable service integrations, we encourage you to apply for this exciting opportunity. Please note that work hours for this position follow Canada EST, and immediate joiners are preferred. If you have 6-8 years of experience and deep expertise in DevSecOps, Kubernetes, Terraform, Azure, and cloud integration, we want to hear from you! Apply now or send a direct message for referrals. Join us in shaping the future of Application Service Integration in a hybrid working environment based in India.
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Technology Service Specialist, AVP at our Pune location, you will be an integral part of the Technology, Data, and Innovation (TDI) Private Bank team. In this role, you will be responsible for providing 2nd-level application support for business applications used in branches, by mobile sales, or via the internet. Your expertise in Incident Management and Problem Management will be crucial in ensuring the stability of these applications.

Partnerdata, the central client reference data system in Germany, is a core banking system that integrates many banking processes and applications through numerous interfaces. With the recent migration to Google Cloud (GCP), you will be involved in operating and further developing applications and functionalities on the cloud platform. Your focus will also extend to regulatory topics surrounding partner/client relationships. We are seeking individuals who can contribute to this contemporary and emerging cloud application area.

Key Responsibilities:
- Ensure an optimum service level to supported business lines
- Oversee resolution of incidents and problems within the team
- Assist in managing business stakeholder relationships
- Define and manage OLAs with relevant stakeholders
- Monitor team performance, adherence to processes, and alignment with business SLAs
- Manage escalations and work with relevant functions to resolve issues quickly
- Identify areas for improvement and implement best practices in your area of expertise
- Mentor and coach Production Management Analysts within the team
- Fulfil service requests, communicate with the Service Desk function, and participate in major incident calls
- Document tasks, incidents, problems, changes, and knowledge bases
- Improve monitoring of applications and implement automation of tasks

Skills and Experience:
- Service Operations Specialist experience in a global operations context
- Extensive experience supporting complex application and infrastructure domains
- Ability to manage and mentor Service Operations teams
- Strong ITIL/best-practice service context knowledge
- Proficiency in interface technologies, communication protocols, and ITSM tools
- Bachelor's degree in IT or a Computer Science-related discipline
- ITIL certification and experience with the ITSM tool ServiceNow preferred
- Knowledge of the banking domain and regulatory topics
- Experience with databases like BigQuery and an understanding of Big Data and GCP technologies
- Proficiency in tools such as GitHub, Terraform, Cloud SQL, Cloud Storage, Dataproc, and Dataflow
- Architectural skills for big data solutions and interface architecture

Area-Specific Tasks/Responsibilities:
- Handle Incident/Problem Management and Service Request fulfilment
- Analyze and resolve incidents escalated from 1st-level support
- Support the resolution of high-impact incidents and escalate when necessary
- Provide solutions for open problems and support service transition for new projects/applications

Joining our team, you will receive training, development opportunities, coaching from experts, and a culture of continuous learning to support your career progression. We value diversity and promote a positive, fair, and inclusive work environment at Deutsche Bank Group. Visit our company website for more information.
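Since the role above calls out BigQuery alongside other GCP tooling, here is a minimal, illustrative sketch of querying BigQuery from Python with the official client library. The project, dataset, table, and column names are placeholders and do not refer to Partnerdata.

```python
from google.cloud import bigquery

# Credentials are taken from the environment (e.g., GOOGLE_APPLICATION_CREDENTIALS).
client = bigquery.Client(project="example-project")

query = """
    SELECT application, COUNT(*) AS open_incidents
    FROM `example-project.itsm.incidents`
    WHERE status = 'OPEN'
      AND created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY application
    ORDER BY open_incidents DESC
"""

# Run the query and print one line per application with its open-incident count.
for row in client.query(query).result():
    print(row.application, row.open_incidents)
```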
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Punjab
On-site
As an Integration DevOps Engineer in Sydney, you will need expertise in the Red Hat OpenShift Kubernetes container platform, Infrastructure as Code using Terraform, and a range of DevOps concepts, tools, and languages. With at least 3+ years of experience, you will be responsible for developing, configuring, and maintaining OpenShift in lower environments and in production.

Your role will involve working with automation and CI/CD tools such as Ansible, Jenkins, Tekton, or Bamboo pipelines in conjunction with Kubernetes containers. A strong understanding of security policies, including BasicAuth, OAuth, and WSSE tokens, and of configuring security policies for APIs using HashiCorp Vault, will be essential for this position. In addition, you will be expected to create environments, namespaces, virtual hosts, API proxies, and caches, and to work with Apigee X, Istio service mesh, and Confluent Kafka setup and deployments. Your experience with cloud architectures such as AWS, Azure, private, on-premises, and multi-cloud will be valuable.

Furthermore, you will play a key role in developing, managing, and supporting automation tools, processes, and runbooks. Your contribution to delivering services or features via an agile DevOps approach, ensuring information security for the cloud, and promoting good practices in coding and automation will be crucial. Effective communication skills are essential, as you will be required to provide thought leadership on cloud platforms, automation, coding patterns, and components. A self-rating matrix from the candidate on skills such as OpenShift, Kubernetes, and Apigee X is mandatory for consideration. If you have any questions or need further clarification, please feel free to reach out.
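Because the posting above mentions securing API credentials with HashiCorp Vault, here is a small, illustrative Python sketch using the hvac client to read a secret from a KV v2 engine. The Vault address, token, and secret path are placeholders; a real pipeline would authenticate via AppRole or OIDC rather than a hard-coded token.

```python
import hvac

# Placeholder address and token -- in practice these come from the environment
# or a proper auth method (AppRole, OIDC, Kubernetes auth).
client = hvac.Client(url="https://vault.example.internal:8200")
client.token = "s.example-token"

# Read an API credential from the KV v2 secrets engine; the path is illustrative.
secret = client.secrets.kv.v2.read_secret_version(path="apigee/service-account")
api_key = secret["data"]["data"]["api_key"]
print("retrieved key of length", len(api_key))
```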
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Iamneo, a fast-growing B2B EdTech SaaS company, is seeking a Senior DevOps & Cloud Operations Engineer to take end-to-end ownership of cloud infrastructure and DevOps practices. This role is crucial in driving scalable, secure, and high-performance deployment environments for applications. If you have a passion for innovation and growth and are eager to redefine the future of tech learning, iamneo is the place for you.

As the Senior DevOps & Cloud Operations Engineer, your responsibilities will include architecting, deploying, and managing scalable cloud infrastructure; leading infrastructure optimization initiatives; designing and implementing CI/CD pipelines; automating infrastructure provisioning and configuration; managing containerized environments; implementing configuration management; setting up monitoring and alerting systems; and writing automation and operational scripts. You will also ensure security controls and compliance, conduct infrastructure audits, backups, and disaster recovery drills, troubleshoot and resolve infrastructure-related issues, collaborate with product and development teams, support platform transitions and cloud migration efforts, and mentor junior engineers.

The ideal candidate has at least 5 years of hands-on experience in DevOps, cloud infrastructure, and system reliability; strong expertise in GCP and Azure; proficiency in CI/CD, infrastructure-as-code, and container orchestration; scripting skills in Bash, Python, or similar languages; a solid understanding of cloud-native and microservices architectures; strong problem-solving and communication skills; and an ownership mindset. Preferred qualifications include GCP and/or Azure certifications, experience with Agile and DevOps cultural practices, prior experience deploying web applications using Node.js, Python, or similar technologies, and the ability to thrive in fast-paced environments.

If you are someone who enjoys working in a multi-cloud, automation-first environment and excels at building robust systems that scale, iamneo welcomes your application.
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Lead/Engineer DevOps at Wabtec Corporation, you will play a crucial role in CI/CD and automation design and validation activities. Working under the project responsibility of the Technical Project Manager and the technical responsibility of the software architect, you will be a key member of the WITEC team in Bengaluru. Your main responsibilities will include respecting internal processes, adhering to coding rules, writing documentation in alignment with the implementation, and meeting the Quality, Cost, and Time objectives set by the Technical Project Manager. To be successful in this role, you should hold a Bachelor's or Master's degree in Engineering in Computer Science (web option), IT, or a related field. Additionally, you should have 6 to 10 years of hands-on experience as a DevOps Engineer. The ideal candidate will have a good understanding of Linux systems and networking, proficiency in CI/CD tools like GitLab, knowledge of containerization technologies such as Docker, and experience with scripting languages like Bash and Python. Hands-on experience in setting up CI/CD pipelines, configuring virtual machines, and familiarity with C/C++ build tools like CMake and Conan is essential. Moreover, expertise in setting up pipelines in GitLab for build, unit testing, and static analysis, along with knowledge of infrastructure as code tools like Terraform or Ansible, will be advantageous. Experience with monitoring and logging tools such as the ELK Stack or Prometheus/Grafana is desirable. As a DevOps Engineer, you should possess strong problem-solving skills and the ability to troubleshoot production issues effectively. A passion for continuous learning, staying current with modern technologies and trends in the DevOps field, and proficiency in project management and workflow tools like Jira, SPIRA, Teams Planner, and Polarion are key attributes for this role. In addition to technical skills, soft skills such as good communication in English, autonomy, interpersonal skills, synthesis skills, and the ability to work well in a team while managing multiple tasks efficiently are highly valued in this position. At Wabtec, we are committed to embracing diversity and inclusion, not just in our products but also in our people. We celebrate the variety of experiences, expertise, and backgrounds that our employees bring, creating an environment where everyone belongs and diversity is welcomed and appreciated. Join us in our mission to drive progress, unlock our customers' potential, and deliver innovative transportation solutions that move and improve the world.
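To make the GitLab pipeline work concrete, the following sketch uses the python-gitlab client to trigger a pipeline for a project and poll it until completion. The GitLab URL, project path, token variable, and branch are illustrative assumptions, not values from the posting.

```python
# Minimal sketch: trigger a GitLab CI pipeline and wait for its result
# using the python-gitlab client. URL, project path, token, and branch
# are assumptions for the example.
import os
import time
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token=os.environ["GITLAB_TOKEN"])
project = gl.projects.get("firmware/train-control")  # hypothetical project path

pipeline = project.pipelines.create({"ref": "main"})
print(f"Started pipeline {pipeline.id}")

# Poll until the pipeline leaves an in-progress state.
while pipeline.status in ("created", "pending", "running"):
    time.sleep(30)
    pipeline.refresh()

print(f"Pipeline {pipeline.id} finished with status: {pipeline.status}")
```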
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
ahmedabad, gujarat
On-site
You have hands-on experience with OpenShift, including installation, configuration, and scaling. You are able to design, deploy, and manage OpenShift Container Platform environments, as well as OpenStack platform environments. Your skills include developing automation playbooks for container orchestration using Ansible and Terraform. Troubleshooting containerized applications and platform-level issues is part of your expertise, and you collaborate closely with DevOps and cloud teams on cloud-native application deployments. You are responsible for lifecycle management tasks such as upgrades, patching, and scaling, and for ensuring platform security and compliance with industry standards. Additionally, you are familiar with monitoring and logging tools like ELK, Prometheus, and Grafana. Red Hat certifications such as EX280 (OpenShift Administrator) and RHCA would be a strong advantage for this role.
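As one small, hedged illustration of programmatic container orchestration on OpenShift, the sketch below scales a deployment with the official kubernetes Python client. In practice the same step would often be an Ansible playbook task or an `oc scale` command; the deployment name and namespace here are placeholders.

```python
# Minimal sketch: programmatically scale a workload on an OpenShift/Kubernetes
# cluster with the kubernetes Python client. The workload name and namespace
# are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="orders-api",        # placeholder workload
    namespace="demo-apps",    # placeholder namespace
    body={"spec": {"replicas": 4}},
)
print("Scale request submitted")
```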
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be a key member of a Cloud Engineering team dedicated to building and developing CI/CD and IaC automation framework tools, advancing Trimble's technologies, and providing our current and future customers with an intuitive product experience. We are looking for someone highly skilled, motivated, and collaborative. You should already have experience building responsive, customer-facing applications using some of the most recent technologies and frameworks, and you should be able to engage in multi-cloud discussions. You have an interest in staying abreast of constantly changing technologies. Finally, you have a quality-first mindset and are excited to roll up your sleeves for the next big challenge. We are seeking a skilled Azure DevOps Engineer to join our team. The ideal candidate will have experience in designing, implementing, and managing DevOps processes and tools in an Azure environment. You will work closely with development, operations, and quality assurance teams to streamline and automate the software development lifecycle.

Key Responsibilities:
- Develop and deploy CI/CD pipelines utilizing Azure DevOps.
- Partner with development teams to guarantee smooth integration of code modifications.
- Automate infrastructure setup and configuration through Azure Resource Manager (ARM) templates and Terraform.
- Monitor and enhance application performance and stability.
- Incorporate security best practices into the DevOps workflow.
- Identify and resolve problems within the DevOps environment.
- Maintain comprehensive documentation for DevOps procedures and tools.

Skills required for this role:
- 6-8 years of experience working with Azure.
- Azure certification.
- Experience with microservices architecture.
- Proficiency in Terraform, Python, or another high-level programming language.
- Experience with SaaS monitoring tools (e.g., Datadog, SumoLogic, PagerDuty, ELK, Grafana).
- Experience with Atlassian tools (Bitbucket, Jira, Confluence) and GitHub.
- Experience with SQL and NoSQL databases.
- Experience with CI/CD tools such as Jenkins or Bamboo.
- Experience with Kubernetes (a plus).
- Extensive experience with Azure App Service, Azure Functions, and other Azure services.
- Proven experience with Azure Front Door and designing multi-region architectures for high availability and disaster recovery.
- Strong understanding of GitHub workflows for CI/CD pipeline implementation and management.
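For a flavour of the provisioning automation listed above, here is a minimal Python sketch that creates a resource group with the Azure SDK. In this role the same resource would normally be declared in an ARM template or Terraform; the subscription ID, resource group name, region, and tags are assumptions for the example.

```python
# Minimal sketch: provision a resource group with the Azure SDK for Python.
# Names, region, and tags are placeholders; real provisioning would usually
# be declarative (ARM templates or Terraform) rather than imperative.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, os.environ["AZURE_SUBSCRIPTION_ID"])

rg = client.resource_groups.create_or_update(
    "rg-devops-sandbox",  # placeholder resource group name
    {"location": "eastus", "tags": {"owner": "devops"}},
)
print(f"Resource group {rg.name} is {rg.properties.provisioning_state}")
```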
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Cloud Engineer at JLLT CoE, you will be responsible for designing and implementing cloud infrastructure across Azure, AWS, and GCP environments. Your role will involve architecting, deploying, and optimizing Azure-based solutions, including compute, storage, networking, and security services. You will lead cloud migration and modernization initiatives with a focus on Azure technologies and maintain infrastructure as code using modern DevOps practices. Additionally, you will manage containerized workloads using AKS and EKS, establish cloud security standards, build CI/CD pipelines, provide mentorship to junior engineers, optimize cloud resource usage, and troubleshoot complex issues in cloud environments. To excel in this role, you should have a minimum of 8 years of experience in cloud engineering and architecture roles, with strong hands-on experience in Azure and working knowledge of the AWS and GCP platforms. Proficiency in Kubernetes orchestration, particularly AKS, along with experience with Karpenter, ArgoCD, and Istio in container environments, is essential. You should also demonstrate expertise in cloud security principles and cloud networking concepts, and have experience with GitHub Actions for CI/CD workflows. Familiarity with infrastructure as code tools like Terraform, Azure ARM templates, and AWS CloudFormation is required, along with excellent communication and collaboration skills. Preferred qualifications include Azure certifications, experience with Azure governance and compliance frameworks, knowledge of hybrid cloud architectures, a background in supporting enterprise-scale applications, and experience with Azure monitoring and observability tools. We are seeking a proactive cloud professional who excels in Azure environments, maintains multi-cloud expertise, can work independently on complex problems, and delivers robust, secure cloud solutions. If you are passionate about cloud technologies and looking to join a dynamic team, apply today to be a part of our exciting journey at JLLT CoE.
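As a small, hedged example of the kind of multi-cluster housekeeping this role implies, the sketch below lists AKS clusters and their Kubernetes versions with the Azure SDK for Python, the sort of quick check that feeds cost and upgrade reviews. The subscription ID comes from the environment; nothing else is specific to the team described above.

```python
# Minimal sketch: inventory AKS clusters and their Kubernetes versions.
# Useful as a quick input to upgrade planning or cost reviews.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

client = ContainerServiceClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

for cluster in client.managed_clusters.list():
    pools = cluster.agent_pool_profiles or []
    print(f"{cluster.name}: k8s {cluster.kubernetes_version}, {len(pools)} node pool(s)")
```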
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Cloud Engineer - Cloud Database Migration Specialist, you will play a crucial role in leading the migration of on-premises databases to Azure, focusing on optimizing cloud infrastructure and ensuring seamless transitions. Your expertise in Azure cloud migration and experience in Oracle, PostgreSQL, and MS SQL Server database administration will be instrumental in successfully executing database migrations to Azure. Your key responsibilities will include planning and executing migrations of various databases to Azure, utilizing cloud-native services and tools as well as open-source tools, and developing custom tools for large-scale data migrations. You will design and implement database migrations with minimal downtime, ensuring that data integrity checks are carried out effectively. Additionally, you will perform tasks such as database performance tuning, backups, and recovery, while collaborating with cross-functional teams to troubleshoot and resolve migration issues. To excel in this role, you should have at least 3 years of Azure cloud migration experience, along with proficiency in Oracle, Postgres, and MS SQL Server DBA tasks. A strong understanding of database architecture and automation tools is essential; expertise in Terraform and an Azure certification are preferred. If you are a detail-oriented Cloud Engineer with a passion for database migration and a knack for optimizing cloud infrastructure, we encourage you to apply for this exciting opportunity to contribute to our team's success.
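One concrete form a post-migration data integrity check can take is a simple row-count comparison between source and target. The Python sketch below does this for a PostgreSQL source and an Azure Database for PostgreSQL target using psycopg2; the connection strings and table names are hypothetical placeholders, and a real validation would usually add checksums or sampled column comparisons.

```python
# Minimal sketch: compare row counts for a fixed set of tables between a
# source PostgreSQL server and its Azure target after migration.
# DSNs and table names are placeholders; table names come from a trusted,
# hard-coded list, so f-string interpolation is acceptable here.
import os
import psycopg2

TABLES = ["customers", "orders", "invoices"]

def row_counts(dsn):
    counts = {}
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for table in TABLES:
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            counts[table] = cur.fetchone()[0]
    return counts

source = row_counts(os.environ["SOURCE_PG_DSN"])
target = row_counts(os.environ["TARGET_AZURE_PG_DSN"])

for table in TABLES:
    status = "OK" if source[table] == target[table] else "MISMATCH"
    print(f"{table}: source={source[table]} target={target[table]} [{status}]")
```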
Posted 6 days ago
1.0 - 5.0 years
0 Lacs
chennai, tamil nadu
On-site
You are a Systems Engineer with 1 to 4 years of experience looking to join a team that operates in a hybrid work model. Your expertise lies in Cisco Catalyst SD-WAN, Azure Networking Services, and other networking technologies, with a strong focus on optimizing and managing network systems in the food services domain to enhance operational efficiency. Your responsibilities will include designing and implementing network solutions using Cisco Catalyst SD-WAN for seamless connectivity, managing and optimizing Azure Networking Services for cloud infrastructure needs, and configuring and maintaining Cisco ISE for network security and compliance. You will utilize Terraform and Ansible for network automation, oversee AWS Networking for reliable cloud services, implement wireless networking solutions using Cisco technologies for mobile and remote access, and configure and troubleshoot Cisco switching and routing for network stability and performance. You will also leverage Google Cloud Networking for multi-cloud strategies, use Wireshark for network analysis and troubleshooting, and implement Fortinet SD-WAN solutions for network traffic optimization. Beyond that, you will collaborate with cross-functional teams to align network strategies with business objectives, provide technical support and guidance to ensure operational requirements are met, and monitor network performance for improvements in efficiency and reliability. To excel in this role, you should possess strong technical skills in Cisco Catalyst SD-WAN, Azure Networking Services, and Cisco ISE, demonstrate proficiency in Terraform and Ansible for network automation, have experience with AWS Networking and wireless networking using Cisco, show expertise in Cisco switching and routing and Google Cloud Networking, be skilled in using Wireshark for network analysis, have knowledge of Fortinet SD-WAN solutions, and exhibit an understanding of the food services domain and its specific networking needs.
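As a brief, hedged illustration of the network automation and auditing side of this role, the sketch below inventories AWS VPCs and their CIDR blocks with boto3. The region and tagging conventions are assumptions, and the same inventory could equally be pulled from Terraform state or Ansible facts.

```python
# Minimal sketch: list VPCs and their CIDR blocks as a quick network audit.
# Region and the use of a "Name" tag are assumptions for the example.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region

for vpc in ec2.describe_vpcs()["Vpcs"]:
    tags = {t["Key"]: t["Value"] for t in vpc.get("Tags", [])}
    print(f"{vpc['VpcId']}  cidr={vpc['CidrBlock']}  name={tags.get('Name', '-')}")
```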
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
ghaziabad, uttar pradesh
On-site
At RightCrowd, we are revolutionizing physical access control with SmartAccess, a next-generation platform that redefines how people interact with security systems. We are transforming an outdated industry into a seamless, futuristic experience. Imagine doors opening effortlessly, just like in Star Trek! Our innovative platform powers cutting-edge solutions that enhance the daily experiences of employees, visitors, and users. Trusted by some of the world's largest organizations, including top tech companies, our products are making a global impact. We are not looking for the perfect candidate with a flawless resume. Instead, we value curiosity, a willingness to learn, and a commitment to making a difference. If you are excited about tackling challenges, growing your skills, and contributing to innovative solutions, we'd love to hear from you, even if you do not meet every single requirement. To enhance existing features and develop new, groundbreaking solutions, we are looking for a passionate Full Stack Software Engineer to join our remote team. Our team has its roots in a Belgian startup, and we still carry the startup spirit within us. We strive to maintain a small team size and minimize corporate overhead. In essence, we offer a high-responsibility, high-expectation environment with cutting-edge technology, free from unnecessary rules and constraints.

**Key Responsibilities:**
- Develop and maintain our web interfaces.
- Review and give feedback on use cases, UI and UX design.
- Contribute to the development of our backend services.
- Support and evolve our cloud-native platform and infrastructure.
- Perform development testing to ensure high-quality deliverables.
- Assist in requirements gathering and architectural decision-making, and provide feedback to shape the product roadmap.
- Create and maintain documentation while continuously sharing knowledge with the team and the broader company.
- Assist in third-line support and handle customer support requests when needed.
- Be an eager learner. We don't expect anyone to already know everything.

**Requirements:**
- Fluency in English; clear communicator
- A commitment to lifelong learning
- Proven 2-4 years of experience in software development within complex environments
- Strong knowledge and experience in:
  - NodeJS and related frameworks
  - TypeScript
  - React
  - Unix systems and networking
  - Containerization, Docker
- Excellent debugging and problem-solving skills
- Analytical, intelligent, and well-organized
- Flexible, hands-on, and comfortable in a fast-paced environment

*Bonus points if you have experience with any of the following:*
- Terraform
- Containerization, Docker
- Unix systems and networking
- Good understanding of and experience with Kubernetes & GitOps
- Observability (metrics, logs, and tracing)

**Why Join Us?**
- Be part of a company that is a leader in the safety, security, and compliance solution space.
- Opportunity to work on innovative products that have a real impact on safety and security.
- Collaborative and supportive work environment with opportunities for professional growth and development.
- Competitive salary and benefits package.

Ready to make an impact? Apply now to join our team!
Posted 6 days ago