5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Senior Data Engineer at our Bangalore office, you will play a crucial role in developing data pipeline solutions to meet business data needs. Your responsibilities will involve designing, implementing, and maintaining structured and semi-structured data models, utilizing Python and SQL for data collection, enrichment, and cleansing. Additionally, you will create data APIs in Python Flask containers, leverage AI for analytics, and build data visualizations and dashboards using Tableau. Your expertise in infrastructure as code (Terraform) and executing automated deployment processes will be vital for optimizing solutions for costs and performance. You will collaborate with business analysts to gather stakeholder requirements and translate them into detailed technical specifications. Furthermore, you will be expected to stay updated on the latest technical advancements, particularly in the field of GenAI, and recommend changes based on the evolving landscape of Data Engineering and AI. Your ability to embrace change, share knowledge with team members, and continuously learn will be essential for success in this role.

To qualify for this position, you should have at least 5 years of experience in data engineering, with a focus on Python programming, data pipeline development, and API design. Proficiency in SQL, hands-on experience with Docker, and familiarity with various relational and NoSQL databases are required. Strong knowledge of data warehousing concepts, ETL processes, and data modeling techniques is crucial, along with excellent problem-solving skills and attention to detail. Experience with cloud-based data storage and processing platforms like AWS, GCP, or Azure is preferred.

Bonus skills such as being a GenAI prompt engineer, proficiency in Machine Learning technologies like TensorFlow or PyTorch, knowledge of big data technologies, and experience with data visualization tools like Tableau, Power BI, or Looker will be advantageous.
Familiarity with Pandas, spaCy, NLP libraries, agile development methodologies, and optimizing data pipelines for cost and performance is also desirable. Effective communication and collaboration skills in English are essential for interacting with technical and non-technical stakeholders. You should be able to translate complex ideas into simple examples to ensure clear understanding among team members. A bachelor's degree in computer science, IT, engineering, or a related field is required, along with relevant certifications in BI, AI, data engineering, or data visualization tools.

The role will be based at The Leela Office on Airport Road, Kodihalli, Bangalore, with a hybrid work schedule: in the office on Tuesdays, Wednesdays, and Thursdays, and from home on Mondays and Fridays. If you are passionate about turning complex data into valuable insights and have experience mentoring junior members and collaborating with peers, we encourage you to apply for this exciting opportunity.
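The collect/cleanse/load pattern this posting describes can be illustrated with a small sketch using Python's built-in sqlite3 module. The table name and field rules below are invented for illustration, not taken from the posting.

```python
import sqlite3

def cleanse_and_load(rows, conn):
    """Normalize raw records and load them into a SQLite staging table.

    A minimal sketch of the collect/cleanse/load pattern; the table
    name and cleansing rules are invented for illustration.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers "
        "(email TEXT PRIMARY KEY, country TEXT)"
    )
    cleaned = []
    for row in rows:
        email = (row.get("email") or "").strip().lower()
        if "@" not in email:  # drop records with no usable key
            continue
        country = (row.get("country") or "unknown").strip().upper()
        cleaned.append((email, country))
    # Parameterized inserts keep the SQL safe and efficient
    conn.executemany(
        "INSERT OR REPLACE INTO customers VALUES (?, ?)", cleaned
    )
    return len(cleaned)
```

In a production pipeline the same shape applies, with the SQLite connection swapped for a warehouse client and the rules driven by a schema definition.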
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About The Role
We are looking for a passionate and skilled Full Stack Developer with strong experience in React.js, Node.js, and AWS Lambda to build a custom enterprise platform that interfaces with a suite of SDLC tools. This platform will streamline tool administration, automate provisioning and deprovisioning of access, manage licenses, and offer centralized dashboards for governance and monitoring.

Required Skills & Qualifications
- 4–6 years of hands-on experience as a Full Stack Developer
- Proficient in React.js and component-based front-end architecture
- Strong backend experience with Node.js and RESTful API development
- Solid experience with AWS Lambda, API Gateway, DynamoDB, S3, etc.
- Prior experience integrating and automating workflows for SDLC tools such as JIRA, Jenkins, GitLab, Bitbucket, GitHub, SonarQube, etc.
- Understanding of OAuth2, SSO, and API key-based authentication
- Familiarity with CI/CD pipelines, microservices, and event-driven architectures
- Strong knowledge of Git and modern development practices
- Good problem-solving skills and ability to work independently

Nice To Have
- Experience with Infrastructure-as-Code (e.g., Terraform, CloudFormation)
- Experience with AWS EventBridge, Step Functions, or other serverless orchestration tools
- Knowledge of enterprise-grade authentication (LDAP, SAML, Okta)
- Familiarity with monitoring/logging tools like CloudWatch, ELK, or Datadog
Posted 1 week ago
1.0 years
0 Lacs
Pune, Maharashtra
On-site
COMPANY OVERVIEW
Domo's AI and Data Products Platform lets people channel AI and data into innovative uses that deliver a measurable impact. Anyone can use Domo to prepare, analyze, visualize, automate, and build data products that are amplified by AI.

POSITION SUMMARY
As a DevOps Engineer in Pune, India at Domo, you will play a crucial role in designing, implementing, and maintaining scalable and reliable infrastructure to support our data-driven platform. You will collaborate closely with engineering, product, and operations teams to streamline deployment pipelines, improve system reliability, and optimize cloud environments. If you thrive in a fast-paced environment and have a passion for automation, optimization, and software development, we want to hear from you!

KEY RESPONSIBILITIES
- Design, build, and maintain scalable infrastructure using cloud platforms (AWS, GCP, or Azure)
- Develop and manage CI/CD pipelines to enable rapid and reliable deployments
- Automate provisioning, configuration, and management of infrastructure using tools like Terraform, Ansible, Salt, or similar
- Develop and maintain tooling to automate, facilitate, and monitor operational tasks
- Monitor system health and performance, troubleshoot issues, and implement proactive solutions
- Collaborate with software engineers to improve service scalability, availability, and security
- Lead incident response and post-mortem analysis to ensure service reliability
- Drive DevOps best practices and continuous improvement initiatives across teams

JOB REQUIREMENTS
- 3+ years of experience in DevOps, Site Reliability Engineering, or infrastructure engineering roles
- 1+ years working in a SaaS environment
- Bachelor's degree in Computer Science, Software Engineering, Information Technology, or a related field
- Expertise in cloud platforms such as AWS, GCP, or Azure; certifications preferred
- Strong experience with infrastructure as code (Terraform, CloudFormation, etc.)
- Proficiency in automation and configuration management tools such as Ansible and Salt
- Hands-on experience with containerization (Docker) and orchestration (Kubernetes)
- Solid understanding of CI/CD tools (Jenkins, GitHub Actions, etc.) and processes
- Strong scripting skills in Python, Bash, or similar languages
- Experience developing applications or tools using Java, Python, or similar programming languages
- Familiarity with Linux system administration and troubleshooting
- Experience with version control systems, particularly GitHub
- Experience with monitoring and logging tools (Prometheus, Grafana, ELK stack, Datadog)
- Knowledge of networking, security best practices, and cost optimization on cloud platforms
- Excellent communication and collaboration skills

LOCATION: Pune, Maharashtra, India

INDIA BENEFITS & PERKS
- Medical insurance provided
- Maternity and paternity leave policies
- Baby bucks: a cash allowance to spend on anything for every newborn or child adopted
- "Haute Mama": cash allowance for maternity wardrobe benefit (only for women employees)
- Annual leave of 18 days + 10 holidays + 12 sick leaves
- Sodexo Meal Pass
- Health and Wellness Benefit
- One-time Technology Benefit: cash allowance towards the purchase of a tablet or smartwatch
- Corporate National Pension Scheme
- Employee Assistance Programme (EAP)
- Marriage leaves up to 3 days
- Bereavement leaves up to 5 days

Domo is an equal opportunity employer. #LI-PD1 #LI-Hybrid
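Operational tooling of the kind listed above is often built from small, well-tested primitives. One common example is a retry helper with exponential backoff for transient infrastructure failures; this is a generic sketch, not Domo's actual tooling, and the attempt count and delays are illustrative.

```python
import time

def retry(op, attempts=3, base_delay=0.01):
    """Run op(), retrying transient failures with exponential backoff.

    Generic operational-tooling sketch; attempt count and delay values
    are illustrative, not tuned production settings.
    """
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the real error
            time.sleep(base_delay * (2 ** i))  # back off 1x, 2x, 4x, ...
```

In practice such a helper would also cap the total delay, add jitter, and retry only on error types known to be transient.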
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
Department: IT-IS
Posted On: 31 Jul 2025
End Date: 31 Dec 2025
Required Experience: 8 - 12 Years
Role: Senior Automation Engineer
Employment Type: Full Time Employee
Company: New Vision Softcom & Consultancy Pvt. Ltd (NewVision)
Region: APAC
Country: India
Base Office Location: Pune
State: Maharashtra
Working Model: Hybrid
Weekly Off: Pune Office Standard
Skills: Automation, PowerShell Scripting, Terraform, MS Azure & M365, Office 365 - Exchange Online, Teams
Highest Education: Graduation/equivalent
Certifications: AZ-104: Microsoft Azure Administrator; Microsoft Certified: Azure Fundamentals (Exam AZ-900)
Working Language: English

Job Summary:
We are seeking a highly experienced and forward-thinking Senior Microsoft Automation Engineer to lead the design, development, and implementation of automation solutions across Microsoft 365 and Azure environments. This role requires deep technical expertise in PowerShell, KQL, Terraform, Azure Functions, and Microsoft-native automation tools such as Power Automate, Azure Automation, and Logic Apps. The ideal candidate will also possess basic knowledge of agentic AI systems, enabling them to explore intelligent automation strategies that enhance operational efficiency and decision-making.

Key Responsibilities:

Automation Strategy & Development
- Architect and implement scalable automation solutions across Microsoft 365 (Entra ID, Exchange Online, SharePoint Online, Teams) and Azure services.
- Develop and maintain advanced PowerShell scripts for administrative tasks, provisioning, and lifecycle automation.
- Design and deploy Infrastructure as Code (IaC) using Terraform for consistent Azure resource management.
- Build and manage workflows using Power Automate, Azure Logic Apps, Azure Automation Runbooks, and Azure Functions for event-driven automation.
- Explore and prototype intelligent automation using agentic AI principles, such as autonomous task execution and decision-making agents.

Monitoring, Reporting & Optimization
- Use Kusto Query Language (KQL) to create queries and dashboards in Azure Monitor, Log Analytics, and Microsoft Sentinel.
- Implement automated monitoring and alerting systems to ensure service health, compliance, and performance.
- Analyze cloud usage and implement automation strategies for cost savings, resource optimization, and governance enforcement.

Employee Lifecycle Automation
- Automate onboarding and offboarding processes including account creation, license assignment, mailbox setup, and access provisioning.
- Integrate with Entra ID for identity lifecycle management and ensure secure transitions.
- Maintain audit trails and compliance documentation for all lifecycle automation processes.

Cloud Migrations & Assessments
- Lead or support cloud migration projects from on-premises to Microsoft 365 and Azure.
- Conduct cloud readiness assessments, identify automation opportunities, and develop migration strategies.
- Collaborate with stakeholders to ensure seamless transitions and high adoption rates.

Collaboration & Governance
- Partner with IT, HR, security, and business teams to gather requirements and deliver automation solutions.
- Establish governance frameworks for automation workflows, including version control, documentation, and change management.
- Ensure compliance with data protection, access control, and audit requirements.

Required Skills & Qualifications:

Technical Expertise
- Deep experience with Microsoft 365 services: Entra ID, Exchange Online, SharePoint Online, Microsoft Teams.
- Advanced proficiency in PowerShell scripting.
- Hands-on experience with Power Automate, Azure Automation, Logic Apps, and Azure Functions.
- Strong command of KQL for telemetry and log analysis.
- Proficiency in Terraform for infrastructure provisioning.
- Familiarity with Microsoft Graph API, REST APIs, and service integrations.
- Basic understanding of agentic AI concepts, such as autonomous agents, task planning, and intelligent orchestration.

Project Experience
- Proven experience in automating employee lifecycle, service provisioning, and compliance workflows.
- Hands-on involvement in cloud migrations, brownfield modernization, and greenfield deployments.
- Experience in cost management, governance, and security best practices in Azure.

Soft Skills
- Strong analytical and problem-solving abilities.
- Excellent communication and stakeholder engagement skills.
- Ability to lead cross-functional teams and mentor junior engineers.
- Commitment to continuous learning and staying updated with Microsoft and AI technologies.

Preferred Qualifications:
- Microsoft Certified: Azure Administrator Associate or equivalent
- Microsoft Certified: Power Platform Developer Associate
- Experience with hybrid environments and third-party automation tools
- Familiarity with DevOps practices and CI/CD pipelines
- Exposure to AI/ML tools and platforms (e.g., Azure AI, OpenAI APIs)
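Employee-lifecycle automation of the kind described above typically starts from a rule table mapping roles to licenses and access. The toy sketch below only builds the provisioning plan; the role names, license SKUs, and domain are invented for illustration and are not real tenant values.

```python
# Hypothetical role-to-license rules; SKU and role names are invented
# for illustration and are not real tenant values.
ROLE_LICENSES = {
    "engineer": ["M365_E3", "AZURE_DEVOPS"],
    "sales": ["M365_E3", "POWER_BI"],
}

def onboarding_plan(user, role):
    """Compute the provisioning steps for a new joiner.

    A real implementation would then apply this plan via Microsoft
    Graph or PowerShell and record the result in an audit trail; here
    we only derive the plan from the rule table.
    """
    licenses = ROLE_LICENSES.get(role)
    if licenses is None:
        raise ValueError(f"no onboarding rule for role: {role}")
    return {
        "upn": f"{user}@contoso.example",  # placeholder domain
        "licenses": licenses,
        "mailbox": True,
        "audit": f"provisioned {user} as {role}",
    }
```

Keeping the rule table declarative makes the same logic reusable for offboarding: the plan is simply inverted and each step logged for compliance.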
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As a Principal Engineer at Walmart's Enterprise Business Services, you will play a pivotal role in shaping the engineering direction, driving architectural decisions, and ensuring the delivery of scalable, secure, and high-performing solutions across the platform. Your responsibilities will include leading the design and development of full stack applications, architecting complex cloud-native systems on Google Cloud Platform (GCP), defining best practices, and guiding engineering excellence. You will have the opportunity to work on crafting frontend experiences, building robust backend APIs, designing cloud infrastructure, and influencing the technical vision of the organization. Collaboration with product, design, and data teams to translate business requirements into scalable tech solutions will be a key aspect of your role. Additionally, you will champion CI/CD pipelines and Infrastructure as Code (IaC), and drive code quality through rigorous design reviews and automated testing.

To be successful in this role, you should bring 10+ years of experience in full stack development, with at least 2 years in a technical leadership or principal engineering role. Proficiency in JavaScript/TypeScript, Python, or Go, along with expertise in modern frontend frameworks like React, is essential. Strong experience with cloud-native systems on GCP, microservices architecture, Docker, Kubernetes, and event-driven systems is required. Your role will also involve managing production-grade cloud systems, working with SQL and NoSQL databases, and staying ahead of industry trends by evaluating new tools and frameworks. Exceptional communication, leadership, and collaboration skills are crucial, along with a GCP Professional Certification and experience with serverless platforms and observability tools.

Joining Walmart Global Tech means being part of a team that makes a significant impact on millions of people's lives through innovative technology solutions. You will work in a flexible, hybrid environment that promotes collaboration and personal development. In addition to a competitive compensation package, Walmart offers a range of benefits and a culture that values diversity, inclusion, and belonging for all associates. As an Equal Opportunity Employer, Walmart fosters a workplace where unique styles, experiences, and identities are respected and valued, creating a welcoming environment for all.
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Delhi
On-site
As a DevOps Intern at LiaPlus AI, you will play a crucial role in building, automating, and securing our AI-driven infrastructure. You will work closely with our engineering team to optimize cloud operations, enhance security and compliance, and streamline deployments using DevOps and MLOps best practices.

Your primary responsibilities will include:
- Infrastructure management: deploying and managing cloud resources on Azure as the primary platform.
- CI/CD: setting up robust pipelines for seamless deployments.
- Security & compliance: ensuring systems align with ISO, GDPR, and SOC 2 requirements.
- Observability: setting up monitoring dashboards and logging mechanisms.
- Databases: managing and optimizing PostgreSQL, MongoDB, and Redis for performance.
- Automation & scripting: writing automation scripts using Terraform, Ansible, and Bash.
- Network & API gateway management: managing API gateways like Kong and Istio.
- Disaster recovery & HA: implementing failover strategies to ensure system reliability.
- AI model deployment & MLOps: deploying and monitoring AI models using Kubernetes and Docker.

Requirements:
- Currently pursuing or recently completed a Bachelor's/Master's in Computer Science, IT, or a related field.
- Hands-on experience with Azure, CI/CD tools, and scripting languages (Python, Bash).
- Understanding of security best practices and cloud compliance (ISO, GDPR, SOC 2).
- Knowledge of database optimization techniques (PostgreSQL, MongoDB, Redis).
- Familiarity with containerization, orchestration, and AI model deployment (Docker, Kubernetes).
- Passion for automation, DevOps, and cloud infrastructure.

Benefits:
- Competitive compensation package.
- Opportunity to work with cutting-edge technologies in AI-driven infrastructure.
- Hands-on experience in a fast-paced and collaborative environment.
- Potential for growth and learning opportunities in the field of DevOps and MLOps.
Join us at LiaPlus AI to be part of a dynamic team that is reshaping the future of AI infrastructure through innovative DevOps practices.
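A failover strategy like the one this internship mentions usually includes logic for choosing which database replica to promote. Here is a deliberately tiny sketch of that decision; using replication lag as the sole health signal, and the data shape itself, are simplifying assumptions for illustration.

```python
def pick_primary(replicas):
    """Choose which replica to promote during failover.

    replicas maps name -> replication lag in seconds, or None if the
    replica is unreachable. Real failover tooling would also weigh
    quorum, durability settings, and network placement.
    """
    live = {name: lag for name, lag in replicas.items() if lag is not None}
    if not live:
        raise RuntimeError("no reachable replica to promote")
    return min(live, key=live.get)  # least-lagged replica wins
```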
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

As an Infrastructure Engineer, you will be responsible for the technical design, planning, implementation, and optimization of performance tuning and recovery procedures for critical enterprise systems and applications. You will serve as the technical authority in system administration for complex SaaS, local, and cloud-based environments. Your role is critical in ensuring the high availability, reliability, and scalability of our infrastructure components. You will also be involved in designing philosophies, tools, and processes to enable the rapid delivery of evolving products.

In This Role You Will
- Design, configure, and document cloud-based infrastructures using AWS Virtual Private Cloud (VPC) and EC2 instances in AWS.
- Secure and monitor hosted production SaaS environments provided by third-party partners.
- Define, document, and manage network configurations within AWS VPCs and between VPCs and data center networks, including firewall, DNS, and ACL configurations.
- Lead the design and review of developer work on DevOps tools and practices.
- Ensure high availability and reliability of infrastructure components through monitoring and performance tuning.
- Implement and maintain security measures to protect infrastructure from threats.
- Collaborate with cross-functional teams to design and deploy scalable solutions.
- Automate repetitive tasks and improve processes using scripting languages such as Python, PowerShell, or Bash.
- Support Airflow DAGs in the Data Lake, utilizing the Spark framework and Big Data technologies.
- Provide support for infrastructure-related issues and conduct root cause analysis.
- Develop and maintain documentation for infrastructure configurations and procedures.
- Administer databases, handle data backups, monitor databases, and manage data rotation.
- Work with RDBMS and NoSQL systems, leading stateful data migration between different data systems.

Experience & Qualifications
- Bachelor's or Master's degree in Information Science, Computer Science, Business, or equivalent work experience.
- 3-5 years of experience with Amazon Web Services, particularly VPC, S3, EC2, and EMR.
- Experience in setting up new VPCs and integrating them with existing networks is highly desirable.
- Experience in maintaining infrastructure for Data Lake/Big Data systems built on the Spark framework and Hadoop technologies.
- Experience with Active Directory and LDAP setup, maintenance, and policies.
- Workday certification is preferred but not required.
- Exposure to Workday Integrations and Configuration is preferred.
- Strong knowledge of networking concepts and technologies.
- Experience with infrastructure automation tools (e.g., Terraform, Ansible, Chef).
- Familiarity with containerization technologies like Docker and Kubernetes.
- Excellent problem-solving skills and attention to detail.
- Strong verbal and written communication skills.
- Understanding of Agile project methodologies, including Scrum and Kanban, is required.
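The backup and data-rotation duties above can be sketched as a simple retention policy over date-stamped backup names. The naming scheme and retention count below are illustrative assumptions, not Expedia's actual setup.

```python
def rotate(backups, keep=7):
    """Split date-stamped backup names into (kept, to_delete).

    Assumes names sort chronologically, e.g. 'db-2024-06-01.dump';
    the naming scheme and retention count are illustrative. Real
    rotation schemes often add weekly/monthly tiers on top of this.
    """
    ordered = sorted(backups, reverse=True)  # newest first
    return ordered[:keep], ordered[keep:]
```

A script built on this would list objects in the backup bucket, compute the two sets, and delete only the second, logging each removal for auditability.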
Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
The goal of Digital Workplace (DW) is to make a significant and positive impact on the way employees work, collaborate, and share information at Airbus, paving the way for the digital transformation of the company. This involves deploying and supporting new tools and technologies to enhance cross-functional collaboration, productivity, company integration, transparency, and information exchange. A key factor in achieving success for Digital Workplace is a well-defined strategy, effective transformation plan, and operational controls. At Devices & Services PSL level (DWD), the focus is on providing modern endpoint services and access to Airbus business applications for employees regardless of their location. To lead the development, standardization, and lifecycle management of global Linux/Unix products & services, we are seeking a skilled and strategic Technical Lead for Linux & Unix Administration. This role plays a critical part in aligning technical capabilities with business requirements, driving efficiency, and ensuring service excellence for our user base. The Technical Lead will serve as the interface between infrastructure teams, operations, and end-user stakeholders, owning the technical product vision, roadmap, and continuous improvement of Unix/Linux-based services. 
Qualification & Experience:
- Bachelor's/Master's degree in Computer Science, Computer Engineering, Information Technology, or a relevant field
- 7-10 years of experience in Linux and Unix administration

Primary Responsibilities:
- Own the technical product lifecycle for Linux/Unix administration services
- Enhance security & monitoring of existing and future Linux/Unix systems
- Support decommissioning of outdated systems and implement modern solutions
- Strengthen the integration of solutions across applications, systems, and platforms
- Define and maintain the technical product vision, strategy, and roadmap
- Collaborate with SMEs, security, and regional teams to ensure services meet standards
- Manage vendor relationships, licensing, and budget for Linux/Unix tools
- Champion automation, self-service, and modernization within the Linux/Unix technical product space

IT Service Management & Strategic Responsibilities:
- Good knowledge of ITIL and experience with ITSM frameworks
- Strong understanding of Linux/Unix systems from a product or operational standpoint
- Experience working in a large enterprise or global environment
- Demonstrated ability to manage complex roadmaps and lead cross-functional teams
- Familiarity with ITIL, Agile/Scrum, and product lifecycle management tools
- Strong communication and stakeholder engagement skills
- Capable of managing partners, suppliers, and subcontractors

Good to Have:
- Hands-on experience with automation tools
- Exposure to security and compliance frameworks
- Experience with cloud-based Linux workloads
- Previous work in a hybrid infrastructure environment
- Comprehensive knowledge to manage macOS environments effectively

Are you ready to take on this exciting challenge and be part of our innovative team at Airbus?
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Udaipur, Rajasthan
On-site
We are looking for a skilled DevOps Engineer to join our technology team and play a key role in automating and enhancing development operations. Your responsibilities will include designing and implementing CI/CD pipelines to facilitate rapid development and deployment, automating infrastructure provisioning using tools such as Terraform, CloudFormation, or Ansible, monitoring system performance, managing cloud infrastructure on AWS/Azure/GCP, and ensuring system reliability and scalability by implementing containerization and orchestration using Docker and Kubernetes.

You will collaborate with developers, QA, and security teams to streamline software delivery, maintain configuration management and version control using Git, and ensure system security through monitoring, patch management, and vulnerability scans. You will also assist with system backups, disaster recovery plans, and rollback strategies.

The ideal candidate has a strong background in CI/CD tools like Jenkins, GitLab CI, or CircleCI, proficiency in cloud services (AWS, Azure, or GCP), experience in Linux system administration and scripting (Bash, Python), and hands-on experience with Docker and Kubernetes in production environments. Familiarity with monitoring/logging tools such as Prometheus, Grafana, ELK, and CloudWatch, as well as good knowledge of networking, DNS, load balancers, and firewalls, is essential.

Preferred qualifications include a Bachelor's degree in Computer Science, IT, or a related field; DevOps certifications (e.g., AWS Certified DevOps Engineer, CKA/CKAD, Terraform Associate); experience in MLOps, serverless architectures, or microservices; and knowledge of security practices in cloud and DevOps environments. If you are passionate about DevOps and have the required skills and experience, we would like to hear from you.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Ghaziabad, Uttar Pradesh
On-site
As a Pipeline Engineer, you will design and implement scalable CI/CD pipelines using tools such as Jenkins, GitLab CI, or GitHub Actions. Your role will involve setting up infrastructure automation using IaC tools like Terraform and Ansible to provision and manage resources efficiently. You will also integrate automated builds, testing, and static code analysis to maintain code quality and streamline the development process.

Managing different environments (dev/staging/production) and securing secrets will be part of your daily tasks, as will using containerization technologies like Docker and Kubernetes for deployments. Implementing monitoring and logging tools such as Prometheus, Grafana, and ELK for tracking performance and debugging issues will be crucial to the success of the projects.

Security will be a top priority as you enforce best practices and compliance checks in the pipeline to safeguard infrastructure and data. Collaborating with teams, documenting processes, and providing training will be key to smooth operations and effective communication. You will also continuously optimize pipeline performance and automate incident management to ensure reliability and quick resolution of issues.
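Secret hygiene of the kind described above often includes failing the pipeline early when a required secret is absent, rather than letting a deploy step die mid-run. A minimal sketch follows, assuming secrets are surfaced as environment variables; the variable names in the usage are invented for illustration.

```python
import os

def check_required_secrets(names, env=os.environ):
    """Raise early if any required pipeline secret is missing.

    Assumes the CI system injects secrets as environment variables.
    Reporting only the names (never the values) keeps logs safe.
    """
    missing = [n for n in names if not env.get(n)]
    if missing:
        raise RuntimeError("missing secrets: " + ", ".join(missing))
```

Called at the top of a deploy script, e.g. `check_required_secrets(["REGISTRY_TOKEN", "DEPLOY_KEY"])`, this turns a confusing late failure into a clear pre-flight error.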
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Vadodara, Gujarat
On-site
We are looking for a highly skilled Cloud Network and Security Engineer (L4) with expertise in cloud infrastructure and network security, and hands-on experience with Palo Alto firewalls. In this role, you will design, implement, and maintain secure cloud and hybrid network environments that align with business objectives and comply with security standards.

Your key responsibilities will include designing, implementing, and managing cloud network security architectures in AWS, Azure, or GCP. You will configure, maintain, and troubleshoot Palo Alto firewalls and security appliances, develop and enforce security policies, conduct security assessments and risk analysis, manage VPNs and routing, and integrate firewall solutions with monitoring tools. Collaborating with DevOps and Cloud teams to ensure secure deployment pipelines and implementing Zero Trust Network Access strategies are also part of the role. Additionally, you will create and maintain network documentation and participate in incident response and post-mortem reporting.

To qualify, you should have at least 6 years of experience in network and security engineering roles, strong hands-on experience with Palo Alto Networks, and familiarity with cloud-native networking and security services. A deep understanding of TCP/IP, routing, switching, DNS, DHCP, and cloud security frameworks is required. Knowledge of automation/scripting (Python, Terraform, Ansible) and professional certifications like PCNSE, CCNP Security, or AWS/Azure Security are preferred.

The ideal candidate has strong analytical and problem-solving skills, excellent communication and documentation abilities, and can work independently and collaboratively in a global team environment. Preferred certifications include PCNSE, AWS Certified Security - Specialty, Azure Security Engineer Associate, CISSP, or CCSP.
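Firewall policy work like the above ultimately rests on ordered, first-match rule evaluation. The toy sketch below shows the idea; the rule and packet fields are simplified illustrations, not Palo Alto's actual object model.

```python
def first_match(rules, packet):
    """Evaluate an ordered rule list against a packet; first match wins.

    "any" acts as a wildcard, and the final return models the implicit
    deny that enterprise firewalls apply when no rule matches. Real
    policies also match on application, user, address objects, etc.
    """
    for rule in rules:
        if (rule["src"] in (packet["src_zone"], "any")
                and rule["dst"] in (packet["dst_zone"], "any")
                and rule["port"] in (packet["port"], "any")):
            return rule["action"]
    return "deny"  # implicit deny
```

Because evaluation stops at the first match, rule ordering is itself a security control, which is why policy reviews pay close attention to shadowed and overly broad rules.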
Wipro is looking for individuals who are inspired by reinvention and are willing to evolve constantly. Join a business that values purpose and empowers you to design your own reinvention. Realize your ambitions at Wipro, where applications from people with disabilities are explicitly welcome.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chandigarh
On-site
As a Telecom Developer specializing in NodeJS, Asterisk, and SIP, you will be responsible for deploying telecom applications on private and public cloud platforms like Red Hat OpenStack, OpenShift, AWS, Azure, and GCP. Your role will involve the installation, acceptance, and performance management of 5G and O-RAN applications. Additionally, you will work on code pipeline and DevOps tools such as Jenkins, Git, GitHub, Bitbucket, Terraform, Azure DevOps, Kubernetes, and AWS DevOps.

Your main responsibilities will include developing and implementing telecom solutions using Asterisk and the SIP protocol to ensure high performance and scalability. You will use NodeJS for backend development, building robust services and APIs that integrate with telecom systems for seamless communication. Collaborating closely with architects and developers, you will translate requirements into technical solutions, participate in design and code reviews, and focus on optimizing and scaling applications to handle growing traffic and user demands. Integration with CRM systems like Salesforce, Zoho, and Leads Square will also be a key aspect of your role to ensure seamless data flow and synchronization.

To excel in this role, you should have proven experience in developing telecom applications, strong proficiency in NodeJS for backend development, and a deep understanding of telecom protocols, especially SIP and Asterisk PBX. Familiarity with CRM integration, problem-solving skills, and the ability to work effectively in a collaborative environment are essential. A degree in Computer Science, Telecommunications, or a related field is preferred, along with at least 5 years of relevant work experience. In addition to technical skills, you should possess behavioral competencies such as attention to detail, customer engagement, proactive self-management, innovation, creativity, and adaptability.
This full-time, permanent position requires in-person work and seeks candidates who can demonstrate hands-on experience with Asterisk, NodeJS, and SIP. A Bachelor's degree is preferred, and additional skills in VoIP, WebRTC, and cloud-based telephony solutions would be advantageous.
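The SIP work described above centers on parsing and routing requests such as INVITE. The following minimal header parser shows the basic shape of a SIP message; Python is used purely for illustration (the role itself uses NodeJS and Asterisk), and the message below is a textbook-style example, not production traffic.

```python
# Illustrative sketch: parse the start line and headers of a SIP message.
def parse_sip_headers(message: str) -> dict:
    """Split a SIP message into its start line and a lowercase header map."""
    head, _, _body = message.partition("\r\n\r\n")  # headers end at blank line
    lines = head.split("\r\n")
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return {"start_line": lines[0], "headers": headers}

invite = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP host.example.com;branch=z9hG4bK776\r\n"
    "From: <sip:alice@example.com>;tag=1928301774\r\n"
    "To: <sip:bob@example.com>\r\n"
    "CSeq: 314159 INVITE\r\n"
    "\r\n"
)
parsed = parse_sip_headers(invite)
```

A real stack (Asterisk, or a NodeJS SIP library) handles compact header forms, multi-value headers, and transport details that this sketch ignores.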
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Software Engineer (Cloud Development) at our company, you will have the opportunity to be a key part of the Cloud development group in Bangalore. We are seeking passionate individuals who are enthusiastic problem solvers and experienced Cloud engineers to help us build and maintain the Synamedia GO product and Infinite suite of solutions. Your role will involve designing, developing, and deploying solutions using your deep-rooted programming and system experience for the next generation of products in the domain of video streaming.

Your key responsibilities will include conducting technology evaluations, developing proof of concepts, designing Cloud distributed microservices features, writing code, conducting code reviews, continuous integration, continuous deployment, and automated testing. You will work as part of a development team responsible for building and managing microservices for the platform. Additionally, you will play a critical role in the design and development of services, overseeing the work of junior team members, collaborating in a multi-site team environment, and ensuring the success of your team by delivering high-quality results in a timely manner.

To be successful in this role, you should have a strong technical background with experience in cloud design, development, deployment, and high-scale systems. You should be proficient in loosely coupled design, microservices development, message queues, and containerized application deployment. Hands-on experience with technologies such as NodeJS, Java, GoLang, and cloud technologies like AWS, EKS, and OpenStack is required. You should also have experience in DevOps, CI/CD pipelines, monitoring tools, and database technologies. We are looking for highly motivated individuals who are self-starters, independent, have excellent analytical and logical skills, and possess strong communication abilities.
You should have a Test-Driven Development (TDD) mindset, be open to supporting incidents on Production deployments, and be willing to work outside of regular business hours when necessary.

At our company, we value diversity, inclusivity, and equal opportunity. We offer flexible working arrangements, skill enhancement and growth opportunities, health and wellbeing programs, and the chance to work collaboratively with a global team. We are committed to fostering a people-friendly environment where all our colleagues can thrive and succeed. If you are someone who is eager to learn, ask challenging questions, and contribute to the transformation of the future of video, we welcome you to join our team. We offer a culture of belonging, where innovation is encouraged, and we work together to achieve success. If you are interested in this role or have any questions, please reach out to our recruitment team for assistance.
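One pattern behind the message-queue and microservices skills listed above is idempotent message handling: most queues deliver at least once, so consumers must tolerate duplicates. A minimal, hypothetical sketch (message shape and handler are invented for illustration):

```python
# Illustrative sketch: drain a queue while skipping duplicate deliveries.
import queue

def consume(q, seen, handler):
    """Process each message once, keyed by its id, and collect the results."""
    results = []
    while True:
        try:
            msg = q.get_nowait()
        except queue.Empty:
            break
        if msg["id"] in seen:
            continue  # duplicate delivery: at-least-once queues allow these
        seen.add(msg["id"])
        results.append(handler(msg))
    return results

q = queue.Queue()
for m in ({"id": 1, "v": 10}, {"id": 2, "v": 20}, {"id": 1, "v": 10}):
    q.put(m)  # note the deliberate duplicate of id 1
processed = consume(q, set(), lambda m: m["v"] * 2)
```

In production the `seen` set would live in a shared store (e.g. Redis or a database) so deduplication survives restarts and scales across instances.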
Posted 1 week ago
0 years
0 Lacs
India
Remote
Lead the team building Zeller's cutting-edge mobile applications

Join Zeller’s growing engineering team of 150 as a React Native-focused Tech Lead, combining hands-on technical contribution with people leadership. You’ll manage a remote team of three engineers while taking ownership of the systems behind Zeller’s Mobile App and Point of Sale software. Leading your team from the front and staying deeply technical as a player-coach, this role offers high-impact technical leadership with formal people management responsibilities.

About The Team
Zeller’s Mobile team is responsible for developing and scaling the customer-facing mobile applications that empower Australian businesses (soon international). Your new team works with product managers, designers, backend engineers, and others to build and enhance the Zeller Mobile App and Zeller Point of Sale products, encompassing a wide range of functionality including mobile banking and payment solutions.

What You'll Do: Technical Leadership & System Ownership
- Drive architectural solutions collaboratively with your team and engineering leadership to build and scale Zeller’s mobile products
- Write production code regularly as a core contributor while maintaining oversight of your team's technical deliverables
- Own operational excellence for your team’s systems, including monitoring, incident response, and reliability improvements
- Champion modern development practices, including active use of LLMs within your team and the adoption of technical practices at Zeller
- Lead a team of 3 software engineers, identifying skill gaps, managing personal development plans, and fostering a positive culture of learning, ownership, and psychological safety
- Partner closely with product managers as joint owners of your product's success, contributing technical and delivery insight to shape what gets built

What We're Looking For
- You have owned solution design for complex features within mobile apps before and have worked with applications used at scale, or which were mission-critical to their customers
- You have worked in product-driven companies and have a track record of driving pace and delivery
- You are experienced in mobile system design and demonstrate well-reasoned, rational decision making with the ability to navigate between detail and broader context
- You have exceptional written and verbal communication and can move from a conversation with non-technical stakeholders to reframing a project into detailed design requirements for your engineering team
- You have a strong self-learning drive and are actively using and adapting your workflows to incorporate generative AI tools
- Prior experience with React Native is highly valued, and experience with payment or mobile banking solutions is a plus.

Technical Background
You should be comfortable writing production React Native code, but experience with our full tech stack is not an expectation. Some tools we use:
- React Native for cross-platform mobile development, bridging to native code to deliver features like Apple/Google wallet integration
- A modern GraphQL-based API which drives an optimised frontend datastore capability
- Extensive application monitoring tools such as Sentry, Datadog, Segment
- A backend built on a serverless, event-driven system architecture running on AWS
- A rigorous approach to continuous integration and system correctness that includes infrastructure as code (CDK, Terraform) and high coverage of unit, integration, and system tests built with GitHub Actions
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Principal Site Reliability Engineer, you will be responsible for leading all infrastructure aspects of a new cloud-native, microservice-based security platform. This platform is fully multi-tenant, operates on Kubernetes, and utilizes the latest cloud-native CNCF technologies such as Istio, Envoy, NATS, Fluent, Jaeger, and Prometheus. Your role will involve technically leading an SRE team to ensure a high-quality SLA for a global solution running in multiple regions.

Your responsibilities will include building tools and frameworks to enhance developer efficiency on the platform and abstracting infrastructure complexities. Automation and utilities will be developed to streamline service operation and monitoring. The platform handles large amounts of machine-generated data daily and is designed to manage terabytes of data from numerous customers. You will actively participate in platform design discussions with development teams, providing infrastructure insights and managing technology and business tradeoffs. Collaboration with global engineering teams will be crucial as you contribute to shaping the future of Cybersecurity.

At GlobalLogic, we prioritize a culture of caring, where people come first. You will experience an inclusive environment promoting acceptance, belonging, and meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Continuous learning and development are essential at GlobalLogic. You will have access to numerous opportunities to expand your skills, advance your career, and grow personally and professionally. Our commitment to your growth includes programs, training curricula, and hands-on experiences. GlobalLogic is recognized for engineering impactful solutions worldwide. Joining our team means working on projects that make a difference, stimulating your curiosity and problem-solving skills. You will engage in cutting-edge solutions that shape the world today.
We value balance and flexibility, offering various career paths, roles, and work arrangements to help you achieve a harmonious work-life balance. At GlobalLogic, integrity is key, and we uphold a high-trust environment focused on ethics and reliability. You can trust us to provide a safe, honest, and ethical workplace dedicated to both employees and clients.

GlobalLogic, a Hitachi Group Company, is a leading digital engineering partner to top global companies. With a history of digital innovation since 2000, we collaborate with clients to create innovative digital products and experiences, driving business transformation and industry redefinition through intelligent solutions.
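The "high-quality SLA" responsibility in this role has simple arithmetic behind it: an availability target implies a fixed error budget of allowed downtime per period. A small generic sketch (not GlobalLogic-specific):

```python
# Hedged example: convert an SLA availability target into a monthly error budget.
def error_budget_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of allowed downtime in a period for a given SLA target."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

budget_999 = error_budget_minutes(99.9)    # "three nines": ~43 minutes/month
budget_9999 = error_budget_minutes(99.99)  # "four nines": ~4 minutes/month
```

Each extra nine cuts the budget tenfold, which is why multi-region platforms like the one described here invest so heavily in automation and telemetry.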
Posted 1 week ago
2.0 - 8.0 years
0 Lacs
karnataka
On-site
As a Technical Project Manager at IAI Solution Pvt Ltd, a company specializing in applied AI solutions, you will lead software projects, translate business goals into technical roadmaps, and coordinate delivery across frontend and backend teams. Your responsibilities include overseeing deployments on cloud platforms like Azure/AWS/GCP, managing CI/CD pipelines, and ensuring project timelines and resource planning.

To excel in this role, you must have 8+ years of software engineering experience, including 2+ years as a Technical Project Manager or Technical Lead, with proficiency in JavaScript, Java, Python, and Spring Boot. Experience in cloud and solution architecture, managing technical teams, and Agile project management using tools like Jira is essential. Startup experience is preferred, along with strong communication skills and familiarity with DevOps practices.

Your technical stack will include technologies such as React.js, Next.js, Python, FastAPI, Django, Spring Boot, Azure, AWS, Docker, Kubernetes, Terraform, PostgreSQL, MongoDB, Redis, Kafka, RabbitMQ, Prometheus, Grafana, and the ELK Stack. Good-to-have skills include exposure to AI/ML projects, microservices, performance tuning, and certifications like PMP, CSM, CSPO, SAP Activate, PRINCE2, AgilePM, and ITIL.

As part of the team, you will enjoy competitive compensation, performance incentives, and the opportunity to work on high-impact software and AI initiatives in a product-driven, fast-paced environment. Additionally, you will benefit from a flexible work culture, learning support, and health benefits.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
You will be a Cloud Engineer at PAC Panasonic Avionics Corporation based in Pune, India. Your primary responsibility will be to modernize the legacy SOAP-based Airline Gateway (AGW) by building a cloud-native, scalable, and traceable architecture using AWS, Python, and DevOps practices. This will involve migrating from legacy SOAP APIs to modern REST APIs, implementing CI/CD pipelines, containerization, and automation processes to enhance system performance and reliability. Your role will also include backend development, networking, and cloud-based solutions to contribute to scalable and efficient applications. As a Cloud Engineer, your key responsibilities will include designing, building, and deploying cloud-native solutions on AWS, with a focus on migrating from SOAP-based APIs to RESTful APIs. You will develop and maintain backend services and web applications using Python for integration with cloud services and systems. Implementing CI/CD pipelines, automation, and containerization using tools like Docker, Kubernetes, and Terraform will be crucial aspects of your role. You will also utilize Python for backend development, including writing API services, handling business logic, and managing integrations with databases and AWS services. Ensuring scalability, security, and high availability of cloud systems will be essential, along with implementing monitoring and logging solutions for real-time observability. Collaboration with cross-functional teams to integrate cloud-based solutions and deliver high-quality, reliable systems will also be part of your duties. To excel in this role, you should have experience with AWS cloud services and cloud architecture, including EC2, S3, Lambda, API Gateway, RDS, VPC, IAM, CloudWatch, among others. Strong backend development experience with Python, proficiency in building and maintaining web applications and backend services, and solid understanding of Python web frameworks like Flask, Django, or FastAPI are required. 
Experience with database integration, DevOps tools, RESTful API design, and cloud security best practices is essential. Additionally, familiarity with monitoring tools and the ability to manage cloud infrastructure and deliver scalable solutions are crucial skills for this position. The ideal candidate for this role would have 3 to 5 years of experience and possess additional skills such as experience with airline industry systems, AWS certifications, familiarity with serverless architectures and microservices, and strong problem-solving abilities.

If you are passionate about cloud engineering, have a strong background in Python development, and are eager to contribute to the modernization of legacy systems using cutting-edge technologies, we welcome your application for this exciting opportunity at PAC Panasonic Avionics Corporation.
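At its core, the SOAP-to-REST migration described in this role means lifting XML envelope payloads into JSON resources. A minimal sketch of that translation step (element names like `FlightStatus` are invented for illustration, not the actual AGW schema):

```python
# Illustrative sketch: extract a SOAP body payload and serialize it as JSON,
# the kind of payload translation a SOAP-to-REST migration performs.
import json
import xml.etree.ElementTree as ET

SOAP_NS = "{http://schemas.xmlsoap.org/soap/envelope/}"

def soap_body_to_json(envelope_xml: str) -> str:
    """Take the first element under soap:Body and return its fields as JSON."""
    root = ET.fromstring(envelope_xml)
    body = root.find(f"{SOAP_NS}Body")
    payload = list(body)[0]  # e.g. <FlightStatus>
    fields = {child.tag: child.text for child in payload}
    return json.dumps(fields)

envelope = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <FlightStatus>
      <flight>PA123</flight>
      <status>ON_TIME</status>
    </FlightStatus>
  </soap:Body>
</soap:Envelope>"""
rest_payload = soap_body_to_json(envelope)
```

In the real migration, a Flask/FastAPI route would return this JSON from a resource-style URL (e.g. `GET /flights/<id>/status`), with the XML handling retired once clients move over.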
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
This position requires an experienced professional with a hands-on EPM infrastructure administration background in the Oracle Hyperion suite of products. Your primary responsibilities will include managing day-to-day operations, providing end-user support, and supporting project initiatives across the EPM platform. You will collaborate closely with functional teams to address complex business issues and design scalable system solutions. Your role will be crucial in establishing a "best-in-class" EPM solution for the company. As the ideal candidate, you should possess a deep understanding of Hyperion concepts and their practical applications in a production business environment. You will be responsible for supporting critical business operations and delivering quality services independently to multiple customer engagements. Your duties will involve analyzing functional and operational issues within the Oracle EPM/Hyperion environment, as well as developing, implementing, and supporting Oracle's global infrastructure. 
Key Responsibilities:
- Analyzing and resolving functional and operational issues within the Oracle EPM/Hyperion environment
- Managing operations and end-user support for various Hyperion EPM 11x platform components
- Troubleshooting integration and data issues across Hyperion applications and boundary systems
- Coordinating operational handover with global support teams
- Assisting business partners with application configuration and administration
- Monitoring and streamlining tasks and procedures in line with company standards
- Collaborating with technical infrastructure teams, application teams, and business units to ensure service delivery meets business objectives
- Establishing and implementing standard operational policies and procedures
- Coordinating patching activities and monthly deployments with infrastructure teams
- Monitoring and optimizing Hyperion system performance
- Modifying existing scripts and consolidation/business rules as needed
- Participating in system testing, data validations, and stress testing of new features
- Managing daily terminations from Hyperion systems per SOX requirements
- Creating technical documentation and updating end-user training materials
- Providing business continuity and disaster recovery solutions for the Hyperion environment
- Improving technical processes and developing new processes to enhance Oracle support for customers
- Collaborating with project managers and customer implementation teams to ensure project success
- Providing mentoring and cross-training to team members
- Setting up new releases of the Hyperion application in the lab environment
- Collaborating with offshore team members and global business customers
- Demonstrating strong customer management skills and adherence to processes

Requirements:
- Knowledge of Hyperion/EPM 11.1.2.4, 11.2.x
- Experience with various Hyperion EPM components is essential
- Hands-on experience with Hyperion installation, upgrading, and migration
- Knowledge of Oracle database, WebLogic application server, and performance tuning
- Experience with cloud EPM solutions is advantageous
- Strong understanding of Hyperion system tools and Windows Server technologies
- Bachelor's Degree in Computer Science, Engineering, MIS, or related field

Desired Skills:
- Knowledge of ServiceNow, SRs, RFCs, and My Oracle Support
- Familiarity with OAC-Essbase, Essbase 19c, and Essbase 21c
- Implementation experience in DRM
- EPMA to DRM migration for Hyperion 11.1.2.x to 11.2.x

If you meet the requirements and possess the desired skills, this role offers an opportunity to contribute to the success of the company's EPM platform and work in a challenging yet rewarding environment. Your ability to collaborate effectively, communicate clearly, and drive technical excellence will be key to your success in this role.
Posted 1 week ago
10.0 years
0 Lacs
Greater Kolkata Area
Remote
Java Back End Engineer with AWS
Location: Remote
Experience: 10+ Years
Employment Type: Full-Time

Job Overview
We are looking for a highly skilled Java Back End Engineer with strong AWS cloud experience to design and implement scalable backend systems and APIs. You will work closely with cross-functional teams to develop robust microservices, optimize database performance, and contribute across the tech stack, including infrastructure automation.

Core Responsibilities
- Design, develop, and deploy scalable microservices using Java, J2EE, Spring, and Spring Boot.
- Build and maintain secure, high-performance APIs and backend services on AWS or GCP.
- Use JUnit and Mockito to ensure test-driven development and maintain code quality.
- Develop and manage ETL workflows using tools like Pentaho, Talend, or Apache NiFi.
- Create High-Level Design (HLD) and architecture documentation for system components.
- Collaborate with cross-functional teams (DevOps, Frontend, QA) as a full-stack contributor when needed.
- Tune SQL queries and manage performance on MySQL and Amazon Redshift.
- Troubleshoot and optimize microservices for performance and scalability.
- Use Git for source control and participate in code reviews and architectural discussions.
- Automate infrastructure provisioning and CI/CD processes using Terraform, Bash, and pipelines.

Primary Skills
- Languages & Frameworks: Java (v8/17/21), Spring Boot, J2EE, Servlets, JSP, JDBC, Struts
- Architecture: Microservices, REST APIs
- Cloud Platforms: AWS (EC2, S3, Lambda, RDS, CloudFormation, SQS, SNS) or GCP
- Databases: MySQL, Redshift

Secondary Skills (Good To Have)
- Infrastructure as Code (IaC): Terraform
- Additional Languages: Python, Node.js
- Frontend Frameworks: React, Angular, JavaScript
- ETL Tools: Pentaho, Talend, Apache NiFi (or equivalent)
- CI/CD & Containers: Jenkins, GitHub Actions, Docker, Kubernetes
- Monitoring/Logging: AWS CloudWatch, DataDog
- Scripting: Bash, Shell scripting

Nice To Have
- Familiarity with agile software development practices
- Experience in a cross-functional engineering environment
- Exposure to DevOps culture and tools
(ref:hirist.tech)
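The ETL responsibility above can be sketched at its smallest: extract raw rows, transform them (normalize), and load them into a database. This toy example uses stdlib sqlite3 rather than Pentaho/Talend/NiFi, purely to show the shape of the workflow:

```python
# Illustrative sketch of a tiny extract-transform-load step with stdlib sqlite3.
import sqlite3

def run_etl(rows):
    """Clean raw (name, email) rows, load them, and read back the result."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    # Transform: trim whitespace and lowercase emails before loading.
    cleaned = [(n.strip(), e.strip().lower()) for n, e in rows]
    conn.executemany("INSERT INTO users VALUES (?, ?)", cleaned)
    return conn.execute("SELECT name, email FROM users ORDER BY name").fetchall()

loaded = run_etl([(" Bob ", "BOB@Example.com"), ("Alice", "alice@example.com ")])
```

Dedicated ETL tools add scheduling, lineage, and error handling around this same extract-transform-load core.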
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Name: Infrastructure Security Engineer
Location: Onsite, Ahmedabad
Job Type: Full Time

Position Overview
We are seeking an experienced Infrastructure Security Engineer to join our cybersecurity team and play a critical role in protecting our organization's digital infrastructure. This position requires a versatile security professional who can operate across multiple domains including cloud security, vulnerability and patch management, endpoint protection, and security operations.

Key Responsibilities

AWS Cloud Security
- Design, implement, and maintain security controls across AWS environments including IAM policies, security groups, NACLs, and VPC configurations
- Configure and manage AWS security services such as CloudTrail, GuardDuty, Security Hub, Config, and Inspector
- Implement Infrastructure as Code (IaC) security best practices using CloudFormation, Terraform, or CDK
- Conduct regular security assessments of cloud architectures and recommend improvements
- Manage AWS compliance frameworks and ensure adherence to industry standards (SOC 2, ISO 27001, etc.)

Vulnerability Management
- Lead enterprise-wide vulnerability assessment programs using tools such as Nessus
- Develop and maintain vulnerability and patch management policies, procedures, SLAs, and regular reporting
- Coordinate with IT and development teams to prioritize and remediate security vulnerabilities
- Generate executive-level reports on vulnerability metrics and risk exposure
- Conduct regular penetration testing and security assessments of applications and infrastructure

Patch Management
- Design and implement automated patch management strategies across Windows, Linux, and cloud environments
- Coordinate with system administrators to schedule and deploy critical security patches
- Maintain patch testing procedures to minimize business disruption
- Monitor patch compliance across the enterprise and report on patch deployment status
- Develop rollback procedures and incident response plans for patch-related issues

Endpoint Security
- Deploy and manage endpoint detection and response (EDR) solutions such as CrowdStrike
- Configure and tune endpoint security policies including antivirus, application control, and device encryption
- Investigate and respond to endpoint security incidents and malware infections
- Implement mobile device management (MDM) and bring-your-own-device (BYOD) security policies
- Conduct forensic analysis of compromised endpoints when required

Required Qualifications

Education & Experience
- Bachelor's degree in Computer Science, Information Security, or a related field
- Minimum 5+ years of hands-on experience in information security roles
- 3+ years of experience with AWS cloud security architecture and services

Technical Skills
- Cloud Security: Deep expertise in AWS security services, IAM, VPC security, and cloud compliance frameworks
- Vulnerability Management: Proficiency with vulnerability scanners (Qualys, Nessus, Rapid7) and risk assessment methodologies
- Patch Management: Experience with automated patching tools (WSUS, Red Hat Satellite, AWS Systems Manager)
- Endpoint Security: Hands-on experience with EDR/XDR platforms and endpoint management tools
- SIEM/SOAR: Advanced skills in log analysis, correlation rule development, and security orchestration
- Operating Systems: Strong knowledge of Windows and Linux security hardening and administration

Security Certifications (Preferred)
- AWS Certified Security - Specialty
- CISSP (Certified Information Systems Security Professional)
- GCIH (GIAC Certified Incident Handler)
- CEH (Certified Ethical Hacker)

Key Competencies
- Strong analytical and problem-solving skills with attention to detail
- Excellent communication skills and the ability to explain complex security concepts to technical and non-technical stakeholders
- Project management capabilities with experience leading cross-functional security initiatives
- Ability to work in fast-paced environments and manage multiple priorities
- Strong understanding of regulatory compliance requirements (PCI-DSS, HIPAA, SOX, GDPR)
- Experience with risk assessment frameworks and security governance

Reporting Structure
This position reports to the Engineering Manager, Cyber Security, and collaborates closely with IT Operations and Development Teams.
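The vulnerability-management duties above revolve around triage: ranking findings by severity and exposure so remediation effort lands where the risk is. A hypothetical sketch (the CVSS floor and finding format are assumptions, not any scanner's output schema):

```python
# Illustrative sketch: prioritize vulnerability findings by exposure, then CVSS.
def prioritize(findings, cvss_floor=7.0):
    """Return internet-facing high-severity findings first, then by CVSS."""
    urgent = [f for f in findings if f["cvss"] >= cvss_floor]
    # Booleans sort False < True, so `not internet_facing` puts exposed first.
    return sorted(urgent, key=lambda f: (not f["internet_facing"], -f["cvss"]))

findings = [
    {"cve": "CVE-A", "cvss": 9.8, "internet_facing": False},
    {"cve": "CVE-B", "cvss": 8.1, "internet_facing": True},
    {"cve": "CVE-C", "cvss": 5.0, "internet_facing": True},  # below the floor
]
order = [f["cve"] for f in prioritize(findings)]
```

Real programs fold in more signals (exploit availability, asset criticality, SLA age), but the sorted-tuple-key shape stays the same.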
Posted 1 week ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Full Time | Ahmedabad/GiftCity
Trading Technologies: Multi-asset platform for capital markets

The Site Reliability Engineer (SRE) position is a software development-oriented role, focusing heavily on coding, automation, and ensuring the stability and reliability of our global platform. The ideal candidate will primarily be a skilled software developer capable of participating in on-call rotations. The SRE team develops sophisticated telemetry and automation tools, proactively monitoring platform health and executing automated corrective actions. As guardians of the production environment, the SRE team leverages advanced telemetry to anticipate and mitigate issues, ensuring continuous platform stability.

What Will You Be Involved With?
- Develop and maintain advanced telemetry and automation tools for monitoring and managing global platform health.
- Actively participate in on-call rotations, swiftly diagnosing and resolving system issues and escalations from the customer support team (this is not a customer-facing role).
- Implement automated solutions for incident response, system optimization, and reliability improvement.
- Proactively identify potential system stability risks and implement preventive measures.

What Will You Bring to the Table?

Software Development
- 3+ years of professional Python development experience.
- Strong grasp of Python object-oriented programming concepts and inheritance.
- Experience developing multi-threaded Python applications.
- 2+ years of experience using Terraform, with proficiency in creating modules and submodules from scratch.
- Proficiency in, or willingness to learn, Golang.

Operating Systems
- Experience with Linux operating systems.
- Strong understanding of monitoring critical system health parameters.

Cloud
- 3+ years of hands-on experience with AWS services including EC2, Lambda, CloudWatch, EKS, ELB, RDS, DynamoDB, and SQS.
- AWS Associate-level certification or higher preferred.

Networking
- Basic understanding of network protocols: TCP/IP, DNS, HTTP, and load balancing concepts.

Additional Qualifications (Preferred)
- Familiarity with trading systems and low-latency environments is advantageous but not required.

What We Bring to the Table
Compensation: ₹2,000,000 – ₹2,980,801 / year

We offer a comprehensive benefits package designed to support your well-being, growth, and work-life balance.

Health & Financial Security:
- Medical, Dental, and Vision coverage
- Group Life (GTL) and Group Income Protection (GIP) schemes
- Pension contributions

Time Off & Flexibility:
- Enjoy the best of both worlds: the energy and collaboration of in-person work, combined with the convenience and focus of remote days. This is a hybrid position requiring three days of in-office collaboration per week, with the flexibility to work remotely for the remaining two days. Our hybrid model is designed to balance individual flexibility with the benefits of in-person collaboration; it enhances team cohesion, spontaneous innovation, and hands-on mentorship opportunities, and strengthens our company culture.
- 25 days of Paid Time Off (PTO) per year, with the option to roll over unused days.
- One dedicated day per year for volunteering.
- Two professional development days per year to allow uninterrupted professional development.
- An additional PTO day added during milestone anniversary years.
- Robust paid holiday schedule with early dismissal.
- Generous parental leave for all parents (including adoptive parents).

Work-Life Support & Resources:
- Budget for tech accessories, including monitors, headphones, keyboards, and other office equipment.
- Milestone anniversary bonuses.

Wellness & Lifestyle Perks:
- Subsidy contributions toward gym memberships and health/wellness initiatives (including discounted healthcare premiums, healthy meal delivery programs, or smoking cessation support).

Our Culture:
Forward-thinking, culture-based organization with collaborative teams that promote diversity and inclusion.
Trading Technologies is a Software-as-a-Service (SaaS) technology platform provider to the global capital markets industry. The company’s award-winning TT® platform connects to the world’s major international exchanges and liquidity venues in listed derivatives alongside a growing number of asset classes, including fixed income and cryptocurrencies. The TT platform delivers advanced tools for trade execution and order management, market data solutions, analytics, trade surveillance, risk management, and infrastructure services to the world’s leading sell-side institutions, buy-side firms, and exchanges. The company’s blue-chip client base includes Tier 1 banks as well as brokers, money managers, hedge funds, proprietary traders, Commodity Trading Advisors (CTAs), commercial hedgers, and risk managers. These firms rely on the TT ecosystem to manage their end-to-end trading operations. In addition, exchanges utilize TT’s technology to deliver innovative solutions to their market participants. TT also strategically partners with technology companies to make their complementary offerings available to Trading Technologies’ global client base through the TT ecosystem. Trading Technologies (TT) is an equal-opportunity employer. Equal employment has been, and continues to be, a required practice at the Company. Trading Technologies’ practice of equal employment opportunity is to recruit, hire, train, promote, and base all employment decisions on ability rather than race, color, religion, national origin, sex/gender orientation, age, disability, sexual orientation, genetic information or any other protected status. Additionally, TT participates in the E-Verify Program for US offices. To apply for this job please visit tradingtechnologies.pinpointhq.com
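The multi-threaded Python and health-monitoring skills this posting asks for come together in patterns like parallel health polling. A minimal sketch with stubbed checks (this is a generic illustration, not TT's actual telemetry tooling):

```python
# Illustrative sketch: run independent health checks concurrently and
# collect their results under a lock.
import threading

def poll_all(checks):
    """Run every health check on its own thread and return a name->bool map."""
    results = {}
    lock = threading.Lock()

    def run(name, check):
        ok = check()  # a real check would time out and catch exceptions
        with lock:
            results[name] = ok

    threads = [threading.Thread(target=run, args=item) for item in checks.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Stub checks stand in for real probes (disk usage, queue depth, etc.).
status = poll_all({"disk": lambda: True, "queue_depth": lambda: False})
```

Threads suit this because health probes are I/O-bound; an automation layer would act on any `False` result rather than just report it.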
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
As an Infrastructure Technical Architect at Salesforce Professional Services, you will play a crucial role in enabling customers to leverage MuleSoft platforms while guiding and mentoring a dynamic team. Your expertise and leadership will help establish you as a subject-matter expert in a company dedicated to innovation.

You will bring experience in container technology such as Docker and Kubernetes, as well as proficiency in configuring IaaS services on major cloud providers like AWS, Azure, or GCP. Your strong infrastructure automation skills, including familiarity with tools like Terraform and AWS CloudFormation, will be key to driving success in this role. Additionally, your knowledge of networking, Linux, systems programming, distributed systems, databases, and cloud computing will be a valuable asset.

In this role, you will work with high-level compiled languages such as Java or C++, along with dynamic languages like Ruby or Python. Your experience in production-level environments will be essential in providing innovative solutions to complex challenges.

Preferred qualifications include certifications in Cloud Architecture or Solution Architecture (AWS, Azure, GCP), as well as expertise in Kubernetes and MuleSoft platforms. Experience with DevSecOps, Gravity/Gravitational, Red Hat OpenShift, and operators will be advantageous. A track record of architecting and implementing highly available, scalable, and secure infrastructure will set you apart as a top candidate.

Your ability to troubleshoot effectively, along with hands-on experience in performance testing and tuning, will be critical in delivering high-quality solutions. Strong communication skills and customer-facing experience will be essential in managing expectations and fostering positive relationships with clients.
If you are passionate about driving innovation and making a positive impact through technology, this role offers a unique opportunity to grow your career and contribute to transformative projects. Join us at Salesforce, where we empower you to be a Trailblazer and shape the future of business.
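The infrastructure-automation skills this role calls for (Terraform, AWS CloudFormation) revolve around declarative templates. As a hedged illustration only (the resource name and bucket name below are hypothetical, not part of this role), a minimal CloudFormation template can be generated and validated from Python:

```python
import json

# Illustrative sketch: a minimal CloudFormation template built as a Python
# dict, then serialized to JSON. The logical ID "ArtifactBucket" and the
# bucket name are made-up examples.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-artifact-bucket"},
        }
    },
}

print(json.dumps(template, indent=2))
```

In practice such a template would be linted and deployed through the CloudFormation or Terraform toolchain; the point here is only the shape of the document.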
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
- Total experience of 10+ years.
- Strong working experience in Data Engineering and Big Data platforms.
- Hands-on experience with Python and PySpark.
- Expertise with AWS Glue, including Crawlers and the Data Catalog.
- Hands-on experience with Snowflake.
- Strong understanding of AWS services: S3, Lambda, Athena, SNS, Secrets Manager.
- Experience with Infrastructure-as-Code (IaC) tools like CloudFormation and Terraform.
- Strong experience with CI/CD pipelines, preferably using GitHub Actions.
- Working knowledge of Agile methodologies, JIRA, and GitHub version control.
- Experience with data quality frameworks and observability.
- Exposure to data governance tools and practices.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.

RESPONSIBILITIES:
- Writing and reviewing great-quality code.
- Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets those requirements.
- Mapping decisions to requirements and translating them for developers.
- Identifying different solutions and narrowing down the option that best meets the client's requirements.
- Defining guidelines and benchmarks for NFR considerations during project implementation.
- Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
- Reviewing architecture and design on aspects such as extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed.
- Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
- Understanding and relating technology integration scenarios and applying these learnings in projects.
- Resolving issues raised during code review through exhaustive, systematic root-cause analysis, and justifying the decisions taken.
- Carrying out POCs to make sure that the suggested design/technologies meet the requirements.

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
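The "data quality frameworks and observability" requirement above can be made concrete with a small sketch. The following is illustrative pure Python (the function and field names are hypothetical, not from any Nagarro codebase) showing a pre-load quality check that counts nulls per column and duplicate business keys in one pass:

```python
# Minimal sketch of a pre-load data-quality check, assuming records arrive
# as dicts (e.g., rows extracted ahead of a Glue/PySpark job). All names
# here are illustrative assumptions.

def profile_batch(records, key_field):
    """Count nulls per column and duplicate business keys in one pass."""
    null_counts = {}
    seen, duplicates = set(), 0
    for row in records:
        for col, val in row.items():
            if val is None:
                null_counts[col] = null_counts.get(col, 0) + 1
        key = row.get(key_field)
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"rows": len(records), "nulls": null_counts, "duplicate_keys": duplicates}

batch = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},
    {"id": 2, "amount": 7.5},  # duplicate business key
]
print(profile_batch(batch, key_field="id"))
# → {'rows': 3, 'nulls': {'amount': 1}, 'duplicate_keys': 1}
```

In production this logic would typically live in a PySpark aggregation or a dedicated quality framework rather than a Python loop, but the pass shape is the same.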
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of our dynamic team, you will play a pivotal role in revolutionizing customer relationship management (CRM) by leveraging advanced artificial intelligence (AI) capabilities. The groundbreaking partnership between Salesforce and Google Cloud, valued at $2.5 billion, aims to enhance customer experiences through the integration of Google's Gemini AI models into Salesforce's Agentforce platform. By enabling businesses to utilize multi-modal AI capabilities for processing images, audio, and video, we are paving the way for unparalleled customer interactions.

Join us in advancing the integration of Salesforce applications on the Google Cloud Platform (GCP). This is a unique opportunity to work at the forefront of identity provider (IDP), AI, and cloud computing, contributing to the development of a comprehensive suite of Salesforce applications on GCP. You will be instrumental in building a platform on GCP to facilitate agentic solutions on Salesforce.

Our Public Cloud engineering teams innovate on and maintain a large-scale distributed systems engineering platform. Responsible for delivering hundreds of features daily to tens of millions of users across various industries, our teams ensure high reliability, speed, security, and seamless preservation of customizations and integrations with each deployment. If you have deep experience in concurrency, large-scale systems, data management, high-availability solutions, and back-end system optimization, we want you on our team.

Your Impact:
- Develop cloud infrastructure automation tools, frameworks, workflows, and validation platforms on public cloud platforms such as AWS, GCP, Azure, or Alibaba
- Design, develop, debug, and operate resilient distributed systems spanning thousands of compute nodes across multiple data centers
- Utilize and contribute to open-source technologies such as Kubernetes, Argo, etc.
- Implement Infrastructure-as-Code using Terraform
- Create microservices on containerization frameworks such as Kubernetes, Docker, and Mesos
- Resolve complex technical issues and drive innovations that enhance system availability, resilience, and performance
- Maintain a balance between live-site management, feature delivery, and technical-debt retirement
- Participate in the on-call rotation to address real-time complex problems and keep services operational and highly available

Required Skills:
- Proficiency in Terraform, Kubernetes, or Spinnaker
- Deep knowledge of programming languages such as Java, Golang, Python, or Ruby
- Working experience with Falcon
- Ownership and operation of critical service instances
- Experience with Agile development and Test-Driven Development methodologies
- Familiarity with essential infrastructure services, including monitoring, alerting, logging, and reporting applications
- Preferred: experience with public cloud platforms

If you are passionate about cutting-edge technologies, thrive in a fast-paced environment, and are eager to make a significant impact in the world of CRM and AI, we welcome you to join our team.
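Building the resilient distributed systems described above usually starts from small, well-tested primitives. One such primitive, sketched here purely as an illustration (the helper and its defaults are assumptions, not Salesforce code), is retry with exponential backoff around a flaky dependency:

```python
import time

# Illustrative sketch: retry with exponential backoff, a common building
# block for resilience against transient failures in distributed systems.
def retry(fn, attempts=4, base_delay=0.01, sleep=time.sleep):
    """Call fn(); on failure, wait base_delay * 2**i before the next attempt."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** i))

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky, sleep=lambda _: None))  # → ok
```

Production variants typically add jitter, a retry budget, and a circuit breaker so that synchronized retries do not amplify an outage.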
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for contributing to the development and deployment of machine learning algorithms. Evaluates accuracy and functionality of machine learning algorithms as a part of a larger team. Contributes to translating application requirements into machine learning problem statements. Analyzes and evaluates solutions both internally generated as well as third party supplied. Contributes to developing ways to use machine learning to solve problems and discover new products, working on a portion of the problem and collaborating with more senior researchers as needed. Works with moderate guidance in own area of knowledge.

Job Description
Core Responsibilities

About the Role:
We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You’ll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows.
This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally.

What You’ll Do
- Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference
- Engineer features to enrich alerts using service relationships, business context, change history, and topological data
- Apply NLP and ML techniques to classify and structure logs and unstructured alert messages
- Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs
- Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data
- Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness
- Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers
- Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals
- Conduct A/B testing, offline validation, and live performance monitoring of ML models
- Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers, and leadership
- Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements
- Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping
- Lead and support innovation efforts including POCs, POVs, and exploration of emerging AI/ML tools and strategies
- Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly
- Participate in on-call rotations and provide operational support as needed

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field
- 3+ years of experience building and deploying ML solutions in production environments
- 2+ years working with AIOps, observability, or real-time operations data
- Strong coding skills in Python (including pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow)
- Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark
- Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs)
- Strong grasp of modern ML techniques including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection
- Experience managing structured and unstructured data, and building features from logs, alerts, metrics, and traces
- Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink
- Strong understanding of model evaluation techniques including precision/recall trade-offs, ROC, AUC, and calibration
- Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases
- Ability to collaborate effectively with SREs and platform teams, and to participate in Agile/DevOps workflows
- Clear written and verbal communication skills to present findings to technical and non-technical stakeholders
- Comfortable working across Git, Confluence, JIRA, and collaborative agile environments

Nice To Have
- Experience building or contributing to an AIOps platform (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC, etc.)
- Experience working in streaming media, OTT platforms, or large-scale consumer services
- Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling
- Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, and event-based alerting tools
- Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation)
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data
- Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration
- Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries
- Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME)
- Experience with containerized model deployment using Docker or Kubernetes
- Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation
- Experience designing APIs in Python or Go to expose models as services
- Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing
- Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts
- Certifications in cloud architecture, ML engineering, or data science specialization

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees.
We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education: Bachelor's Degree. While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience: 2-5 Years
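The "entropy-based prioritization" responsibility in the Comcast listing above can be made concrete with a small sketch. One plausible interpretation (an assumption for illustration, not Comcast's actual method) scores each alert source by the Shannon entropy of its message distribution, so a source that repeats one message endlessly scores low and can be deprioritized:

```python
import math
from collections import Counter

# Illustrative entropy-based alert scoring (an assumed interpretation, not
# Comcast's method): a source that emits the same message over and over
# carries little information (low Shannon entropy) relative to a source
# whose messages vary.
def message_entropy(messages):
    """Shannon entropy, in bits, of a source's alert-message distribution."""
    counts = Counter(messages)
    total = len(messages)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

noisy = ["disk 91% full"] * 8                                  # one repeated alert
varied = ["oom kill", "pod crashloop", "tls expiry", "disk 91% full"]

print(message_entropy(varied))                           # → 2.0
print(message_entropy(varied) > message_entropy(noisy))  # → True
```

A real pipeline would combine such a score with service criticality, change history, and topology features before ranking alerts for on-call engineers.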
Posted 1 week ago