35.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
We make digital human™ by combining human-centered design with real-time Analytics, AI, Cognitive Technology & Automation to create exceptionally engineered Brand Experiences! Sutherland is an experience-led digital transformation company. Our mission is to deliver exceptionally engineered experiences for customers and employees today that continue to delight tomorrow. For over 35 years, we have cared for our customers’ customers, delivering measurable results and accelerating growth. Our proprietary, AI-based products and platforms are built using robust IP and automation. We are a team of global professionals, operationally effective, culturally meshed, and committed to our clients and to one another. We call it One Sutherland. #MakeDigitalHuman.
https://www.sutherlandglobal.com/
https://www.linkedin.com/posts/sutherland-global_sutherland-india-is-certified-as-a-great-activity-6914801835199385600-wvHQ?utm_source=linkedin_share&utm_medium=member_desktop_web

Job Description
Sutherland is seeking an attentive and analytical person to join us as an AI Architect, responsible for overseeing the quality of calls and customer service delivery. We are a group of dynamic and driven individuals. If you are looking to build a fulfilling career and are confident you have the skills and experience to help us succeed, we want to work with you!
Responsibilities:
- Keep management updated: relay vital information in the form of timely and accurate reports.
- Impact the bottom line: produce solid and effective strategies based on accurate and meaningful data reports, analysis, and/or keen observations.
- Define Sutherland’s reputation: oversee and manage performance and service quality to guarantee customer satisfaction; provide coaching and feedback to CSRs.
- Strengthen relationships: establish and maintain communication with clients and/or team members; understand needs, resolve issues, and meet expectations.
- Take the lead: monitor calls/chats/e-mails per the defined sample plan and targets; report any ZT (zero-tolerance) cases and take necessary action; perform RCA (root-cause analysis) and take corrective actions for defects identified during monitoring; drive quality awareness and improvement initiatives across the program.

Qualifications & Experience:
- Bachelor’s or master’s degree in Computer Science, Engineering, or a related field.
- Extensive experience in AI implementation incorporating generative models and agentic AI workflows.
- Solid understanding of machine learning techniques and algorithms.
- Strong knowledge of data structures, algorithms, and software design principles.
- Healthcare domain experience is preferred.

Key Skills:
- Excellent problem-solving skills and attention to detail.
- Effective communication and collaboration abilities.
- Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities.

Other Considerations:
- Location: Lanco, Hyderabad
- Timings: 11:30 AM to 8:30 PM IST
- Work Mode: Work from Office/Hybrid
- Notice Period: Immediate joiners

Additional Information: Roles & Responsibilities – AI Architect/Manager
- Lead the architecture, development, deployment, and implementation of AI systems incorporating generative models and agentic AI workflows, utilizing at least one cloud platform (Azure or GCP).
- Build and optimize GenAI applications, including chatbots, copilots, and AI agents, using Python.
- Collaborate with product teams to integrate AI solutions into platforms and customer-facing applications.
- Evaluate and select models (e.g., OpenAI, Gemini, LLaMA) and orchestration tools (e.g., LangChain, AutoGen) best suited to solution needs.
- Demonstrate proficiency in vector databases (e.g., FAISS, Pinecone) and the design of RAG (Retrieval-Augmented Generation) pipelines.
- Stay abreast of advancements in GenAI, multi-agent systems, vector search technologies, and large-scale retrieval-augmented architectures.
- Define and uphold best practices for AI governance, cost optimization, and robust performance monitoring.
- Conduct ongoing performance tuning, debugging, and root-cause analysis to ensure reliability of AI models and applications.
- Design and maintain comprehensive monitoring and logging systems for AI-powered solutions.
- Collaborate with cross-functional teams, including data scientists, ML engineers, and software developers, to seamlessly integrate AI across the product landscape.
- Document technical specifications, system workflows, and development best practices.
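The RAG pipeline mentioned above can be sketched at its core: embed the documents, retrieve the one closest to the query, and prepend it to the prompt. This is a minimal, illustrative sketch in pure Python; `toy_embed` is a bag-of-words stand-in for a real embedding API (e.g., OpenAI or Azure OpenAI), and a production pipeline would use FAISS or Pinecone for the nearest-neighbour search. All names and documents here are invented.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
import math
from collections import Counter

def toy_embed(text: str) -> Counter:
    # Bag-of-words "embedding" used purely for illustration;
    # a real system would call an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list) -> str:
    # Return the document most similar to the query.
    q = toy_embed(query)
    return max(docs, key=lambda d: cosine(q, toy_embed(d)))

docs = [
    "file a new claim through the member portal",
    "prior authorization is required for imaging procedures",
]
context = retrieve("how do I file a claim", docs)
prompt = f"Answer using this context: {context}"
```

The same shape carries over when the embedding and vector store are swapped for real services; only `toy_embed` and the `max` scan change.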
Posted 4 days ago
9.0 years
0 Lacs
India
Remote
Job Description
JB-4: Senior Lead Engineer

Our Purpose: At Majesco, we believe in connecting people and business to insurance in ways that are innovative, hyper-relevant, compelling, and personal. We bring together the brightest minds to build the future of insurance: a world where insurance makes life and business easier, more connected, and better protected. We are seeking a Senior Lead Engineer to deliver platform-scale automation across insurance domains by integrating advanced AI tools, orchestrated workflows, and modular service components. This role sits at the intersection of cloud-native backend engineering, AI-driven experiences, and cross-functional automation.

All About the Role:
- Lead the development and scaling of document ingestion pipelines, classifier engines, and API gateways supporting intelligent P&C workflows.
- Build modular backend services using FastAPI, Django, or Flask, leveraging asynchronous design, microservices, and cloud-native scalability.
- Design and deploy event-driven automation using Azure Logic Apps, Functions, and Service Bus across claims, billing, and policy processes.
- Containerize services using Docker, deploy and scale on Kubernetes, and ensure high availability through best practices.
- Integrate platform services with Microsoft Copilot Studio and Power Automate, enabling reusable actions, conversational agents, and business rule orchestration.
- Establish telemetry, traceability, and structured logging standards across APIs and workflows using Azure Monitor, App Insights, and OpenTelemetry.
- Drive performance profiling and system optimization initiatives across ingestion, classification, and agent orchestration layers.
- Explore and integrate AI capabilities such as voice embeddings, vector search, and immersive UI elements into the platform.
- Participate actively in PI planning, backlog grooming, and agile ceremonies across engineering and product teams.
- Mentor junior developers and lead sprint-level technical delivery with a focus on modularity, scalability, and AI-readiness.

What You’ll Bring:
- Passion for staying ahead of AI developments and a builder mindset to turn AI capabilities into practical applications.
- Demonstrated ability to design systems with observability, orchestration, and automation at the core.
- A strong performance-first philosophy: the ability to analyze, profile, and optimize services at scale.
- Vision for integrating AI into core insurance workflows, from agent recommendations to customer-facing explainability.
- Willingness to work across time zones and with remote teams.

All About You:
- 9+ years of experience in backend platform development and cloud-native architecture.
- Strong knowledge of FastAPI, Django, or Flask and event-driven microservice design.
- Minimum 5 years of experience with frontend frameworks: HTML5, React JS, Node JS, JavaScript, Angular.
- Exposure to cloud applications, including DevOps/DevSecOps, scaling, deployment, and automation; cloud exposure: MS Azure/AWS, OpenShift, Docker, Kubernetes, Jenkins, GitHub, Jira.
- Hands-on experience with Azure Cloud Services, including Logic Apps, Functions, Cosmos DB, Blob Storage, and Service Bus.
- Proficient with Docker and Kubernetes for containerized deployment and service scaling.
- Experience building intelligent orchestration workflows using Power Automate and Copilot Studio.
- Working knowledge of vector databases, embedding APIs, and LLM integration workflows (OpenAI/Azure OpenAI).
- Exposure to AI-enhanced UIs, such as embedded assistants, predictive agents, or conversational UI.
- Proficient in system performance optimization, error tracing, logging frameworks, and monitoring pipelines.
- Experience working in agile teams with PI planning, story-pointing, sprint demos, and cross-functional delivery.
- P&C insurance domain familiarity is preferred but not mandatory.
Other Qualifications:
- Bachelor’s degree in Computer Science or Engineering; master’s degree a plus.
- Experience with SAFe Agile development practices and processes; SAFe Practitioner certification preferred.
- Experience with the Majesco platforms/products is a plus.
- Experience developing packaged software (products), preferably in the banking or financial services areas, is preferred.
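The event-driven, asynchronous service design this role centers on can be sketched with nothing but stdlib asyncio. Here an in-process queue stands in for Azure Service Bus and the async consumer stands in for a FastAPI worker; the event types and handler names are invented for illustration.

```python
# Sketch of event-driven processing: a producer publishes events to a
# queue and an async consumer handles them.
import asyncio

async def consumer(queue: asyncio.Queue, processed: list):
    while True:
        event = await queue.get()
        if event is None:          # sentinel: shut down cleanly
            break
        # In a real service this would invoke a classifier or billing handler.
        processed.append(f"handled:{event['type']}")
        queue.task_done()

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    processed: list = []
    worker = asyncio.create_task(consumer(queue, processed))
    for evt in ({"type": "claim.created"}, {"type": "policy.renewed"}):
        await queue.put(evt)
    await queue.put(None)          # tell the consumer to stop
    await worker
    return processed

print(asyncio.run(main()))
```

With a real broker, the queue operations become SDK calls, but the consume-loop-with-sentinel shape stays the same.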
Posted 4 days ago
0 years
0 Lacs
India
Remote
This is a remote position.

About Us
Simbian® is building an agentic AI platform for cybersecurity. Founded by repeat successful security founders, we have gathered an excellent cohort of employees, partners, and customers. Our mission is to solve security using AI, and our core values are excellence, replication, and intellectual honesty. Our promise is to make Simbian the best workplace of your career, and we believe a small group of thoughtful, passionate people can make all the positive difference in the world. To fuel our fast growth, we are seeking an exceptional candidate who shares our core values of excellence (being the world's best at our craft), replication (sharing your best ideas with others), and intellectual honesty (telling the truth even if it's bitter). Our AI agents automate security operations and give our customers 10x leverage. Our customers include some of the world's largest companies. Our initial use cases include:
- SOC alert triage and investigation
- Prioritization and classification of vulnerabilities
- AI-based threat hunting

What you’ll do:
- Define the DevOps strategy for cloud-native applications and infrastructure.
- Lead and mentor DevOps engineers and SREs across projects.
- Establish and enforce best practices for CI/CD, infrastructure automation, and monitoring.
- Align DevOps efforts with business goals and cloud cost optimization.
- Recommend right-sizing of infrastructure and implement auto-scaling strategies.
- Design and manage scalable, secure, and highly available cloud infrastructure (AWS, Azure, GCP).
- Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, etc.
- Manage multi-cloud or hybrid environments, ensuring resilience and compliance.
- Architect and manage CI/CD pipelines for automated testing, integration, and deployment.
- Promote automation-first approaches in infrastructure provisioning, configuration, and release workflows.
- Integrate security checks (DevSecOps) into the CI/CD process.
- Oversee the observability stack: monitoring, alerting, tracing, and logging tools (e.g., Prometheus, Grafana, ELK).
- Define and track SLA/SLO/SLI metrics.
- Lead incident response and postmortems for critical production issues.
- Implement cloud security best practices, including secrets management, IAM, encryption, and network policies.
- Ensure compliance with regulatory standards (SOC 2) through automation and audits.
- Oversee release cycles and rollout strategies (blue/green, canary); manage dev/staging/production environments for consistency and reliability.
- Work closely with development, QA, product, and security teams to ensure smooth delivery.
- Advocate for a DevOps culture: transparency, collaboration, and continuous improvement.

What You’ll Get:
- High autonomy and visibility with leadership and founders
- A collaborative, transparent work culture with a bias for action
- Competitive salary with generous equity in a potentially large company
- Opportunity to work with some of the world's most talented and friendliest teammates
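The SLA/SLO/SLI tracking mentioned above usually comes down to error-budget arithmetic: an availability SLO fixes how many minutes of downtime a window allows, and incidents spend that budget. A small sketch, with an illustrative 99.9% target and invented incident durations:

```python
# Error-budget bookkeeping behind SLO tracking.
def error_budget_minutes(slo: float, days: int = 30) -> float:
    # A 99.9% SLO over 30 days allows (1 - 0.999) * 43200 = 43.2 minutes.
    return (1 - slo) * days * 24 * 60

def budget_consumed(incident_minutes: list, slo: float, days: int = 30) -> float:
    # Fraction of the error budget spent (exceeds 1.0 when the SLO is blown).
    return sum(incident_minutes) / error_budget_minutes(slo, days)

budget = error_budget_minutes(0.999)       # ~43.2 minutes for 99.9% / 30 days
spent = budget_consumed([10, 12], 0.999)   # two incidents, ~51% of budget used
```

Tracking `spent` over a rolling window is what lets a team decide when to freeze releases versus keep shipping.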
Posted 4 days ago
3.0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
Job Summary
Auriga is looking for a Software Engineer who can develop and deploy APIs and web applications using Java MVC frameworks and power a variety of leading-edge digital products. You’ll need to bring creative thinking and architectural problem-solving to the table to devise optimal technical solutions, along with highly performant user experiences.

Responsibilities
- Work with business users to gather functional requirements.
- Combine your technical expertise and problem-solving passion to turn complex problems into end-to-end solutions.
- Work with client architects/senior developers on high-level and low-level design/architecture.
- Design and implement high-quality, test-driven backend code for various projects.
- Unit testing/integration testing.
- Code configuration and release management.
- Create and maintain documentation; implement and follow best practices for the development workflow.
- Work collaboratively with team members to ensure deadlines are met.
- Stay current on changes in technology and keep adding to your skill set.

Qualifications
- Minimum 3 years of experience in web application and API development in Java 8 and above.
- Working experience with MVC frameworks like Spring, Play, etc.
- Experience with multithreading, collections, and the concurrency APIs.
- Working experience with web services and APIs (REST, SOAP).
- Working experience with data platforms (relational and/or NoSQL) and messaging technologies.
- Excellent OOP, data structure, and algorithm knowledge.
- Understanding of and experience in API management and Swagger.
- Working knowledge of API testing tools (e.g., Postman) and version control systems like Git.
- Working experience with Linux/Unix environments and shell scripts.
- Proficiency in English.
- Strong collaborator, comfortable working in an agile, remote, and distributed team environment.
- Follow secure coding practices and ensure data protection, authentication, and authorization mechanisms are implemented effectively (e.g., OAuth2, JWT).
- Knowledge of the OWASP Top 10 and implementation of security controls in APIs.

Nice to have
- Experience in one or more front-end development technologies.
- Experience developing microservices in Spring Boot.
- Experience writing high-quality code with fully automated unit test coverage (JUnit, Mockito, etc.).
- Experience defining and applying design/coding standards, patterns, and quality metrics depending on the solution.
- Working experience with various CI/CD systems (Jenkins, Docker, Kubernetes) and build tools (Ant, Maven, Gradle, etc.).
- Working experience creating high-performing applications, including profiling and tuning to improve performance.
- Experience with application logging and monitoring using tools like the ELK Stack, Prometheus, Grafana, or New Relic.
- Experience in Scrum/Agile.
- Knowledge of public cloud infrastructures (AWS, Azure, GCP).
- Knowledge of one or more security or integration frameworks (e.g., Ping, Okta).
- Familiarity with services such as S3, Lambda, EC2, IAM, CloudWatch, or RDS is a plus.
- Understanding of API rate limiting, request throttling, caching strategies (e.g., Redis), and gateway tools like Kong, Apigee, or AWS API Gateway.
- Ability to take full ownership of assigned modules or projects with minimal supervision.

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom, ICICI, and many more. We are a group of people who just could not leave our college life behind, and the inception of Auriga was based solely on a desire to keep working together with friends and enjoying an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in! Our Website
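The rate-limiting and throttling item above typically reduces to a token bucket, which is what gateways like Kong or AWS API Gateway apply per client. A minimal sketch (in Python for brevity, though the role is Java-centric), with the clock injected so the behaviour is deterministic; all numbers are illustrative:

```python
# Token-bucket rate limiting: a burst capacity plus a steady refill rate.
class TokenBucket:
    def __init__(self, rate: float, capacity: float, now: float = 0.0):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = now

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, then spend one token if available.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
results = [bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)]
# → [True, True, False, True]: the burst of two passes, the third request
#   is throttled, and a token has refilled by t=1.5.
```

The same idea, keyed per client ID in Redis, is the usual distributed implementation.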
Posted 4 days ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Key Responsibilities:
- Design, develop, and optimize high-performance backend services using Rust, targeting 1000+ orders per second throughput.
- Implement scalable architectures with load balancing for high availability and minimal latency.
- Integrate and optimize Redis for caching, pub/sub, and data persistence.
- Work with messaging services like Kafka and RabbitMQ to ensure reliable, fault-tolerant communication between microservices.
- Develop and manage real-time systems with WebSockets for bidirectional communication.
- Write clean, efficient, and well-documented code with unit and integration tests.
- Collaborate with DevOps on horizontal scaling and efficient resource utilization.
- Diagnose performance bottlenecks and apply optimizations at the code, database, and network levels.
- Ensure system reliability, fault tolerance, and high availability under heavy loads.

Required Skills & Experience:
- 3+ years of professional experience with Rust in production-grade systems.
- Strong expertise in Redis (clustering, pipelines, Lua scripting, performance tuning).
- Proven experience with Kafka, RabbitMQ, or similar messaging queues.
- Deep understanding of load balancing, horizontal scaling, and distributed architectures.
- Experience with real-time data streaming and WebSocket implementations.
- Knowledge of system-level optimizations, memory management, and concurrency in Rust.
- Familiarity with high-throughput, low-latency systems and profiling tools.
- Understanding of cloud-native architectures (AWS, GCP, or Azure) and containerization (Docker/Kubernetes).

Preferred Qualifications:
- Experience with microservices architecture and service discovery.
- Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK).
- Exposure to CI/CD pipelines for Rust-based projects.
- Experience in security and fault-tolerant design for financial or trading platforms (nice to have).
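The Redis caching responsibility above usually follows the cache-aside pattern with a TTL: check the cache, fall back to the backing store on a miss, and let entries expire. A minimal sketch (in Python for brevity, though the role itself is Rust), with a dict standing in for Redis and the clock injected for determinism; key names and values are invented:

```python
# Cache-aside with TTL, the pattern behind Redis SET ... EX usage.
class TTLCache:
    def __init__(self, ttl: float):
        self.ttl = ttl
        self.store: dict = {}       # key -> (value, expiry_time)

    def get(self, key, now: float):
        entry = self.store.get(key)
        if entry and entry[1] > now:
            return entry[0]         # hit: entry still within its TTL
        return None                 # miss, or entry expired

    def set(self, key, value, now: float):
        self.store[key] = (value, now + self.ttl)

cache = TTLCache(ttl=5.0)
cache.set("order:42", {"status": "filled"}, now=0.0)
hit = cache.get("order:42", now=3.0)     # within TTL → the cached value
miss = cache.get("order:42", now=6.0)    # past TTL → None, re-fetch from store
```

In Redis itself the expiry is handled server-side (`SET key value EX 5`), but the read path, hit or re-fetch, looks the same from the service.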
Posted 4 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary
We are seeking a skilled and innovative Cloud Engineer to join our team. As a Cloud Engineer, you will be responsible for developing and maintaining cloud-based solutions, with a focus on coding solutions to complex problems, automation using Golang and Python, and collaborating with the Site Reliability Engineering (SRE) team on feature deployment in production. Additionally, the ideal candidate should be proficient in using AI tools like Copilot to enhance productivity in automation, documentation, and unit test writing.

Responsibilities
- Develop, test, and maintain cloud-based applications and services using Golang and Python.
- Write clean, efficient, and maintainable code to solve complex problems and improve system performance.
- Collaborate with cross-functional teams to understand requirements and design scalable and secure cloud solutions.
- Automate deployment, scaling, and monitoring of cloud-based applications and infrastructure.
- Work closely with the SRE team to ensure smooth feature deployment in production environments.
- Utilize AI tools like Copilot to enhance productivity in automation, documentation, and unit test writing.
- Troubleshoot and resolve issues related to cloud infrastructure, performance, and security.
- Stay up to date with emerging technologies and industry trends to continuously improve cloud-based solutions.
- Participate in code reviews and knowledge-sharing sessions, and contribute to the improvement of development processes.

Job Requirements
- Strong programming skills in Golang and Python.
- Proficiency in using AI tools like Copilot to enhance productivity in automation, documentation, and unit test writing.
- Solid understanding of cloud computing concepts and services (e.g., AWS, Azure, Google Cloud).
- Experience with containerization technologies (e.g., Docker, Kubernetes) and infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Proficient in designing and implementing RESTful APIs and microservices architectures.
- Familiarity with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI/CD).
- Knowledge of networking concepts, security best practices, and system administration.
- Excellent problem-solving skills and ability to work in a fast-paced, collaborative environment.
- Strong communication and interpersonal skills to effectively collaborate with cross-functional teams.

Preferred Skills
- Experience with other programming languages, such as Java, C++, or Ruby.
- Knowledge of database technologies (e.g., SQL, NoSQL) and data storage solutions.
- Familiarity with monitoring and logging tools (e.g., Prometheus, ELK stack).
- Understanding of Agile/Scrum methodologies and DevOps principles.
- Certifications in cloud technologies (e.g., AWS Certified Cloud Practitioner, Google Cloud Certified - Associate Cloud Engineer) would be a plus.

If you are passionate about cloud technologies, have a strong problem-solving mindset, and enjoy working in a collaborative environment, we would love to hear from you. Join our team and contribute to building scalable, reliable, and secure cloud solutions. Please note that this job description is not exhaustive and may change based on the organization's needs.

Education
A Bachelor of Science degree in Engineering or Computer Science with 2 years of experience, or a Master’s degree, or equivalent experience is typically required.

All internal movements within the Product Group via requisition will be lateral, offering valuable growth opportunities to extend your skills in a new area. Opportunities for promotion will be reviewed in the normal course of business, aligned with our promotion process. At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.
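Automation against cloud APIs, as this role describes, almost always wraps calls in retry with exponential backoff to absorb transient failures (the AWS, Azure, and GCP SDKs ship built-in equivalents). A deterministic sketch with the sleep function injected so it runs instantly; the `flaky` operation and delay values are invented for illustration:

```python
# Retry with exponential backoff around an unreliable operation.
def retry(op, attempts: int = 4, base_delay: float = 0.5, sleep=lambda s: None):
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise                            # budget exhausted: re-raise
            sleep(base_delay * (2 ** attempt))   # 0.5s, 1s, 2s, ...

calls = {"n": 0}
def flaky():
    # Fails twice, then succeeds - simulates a transient cloud API error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = retry(flaky)   # succeeds on the third attempt
```

Production versions typically add jitter to the delay and retry only on error codes known to be transient.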
Equal Opportunity Employer NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification. Why NetApp? We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better - but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches. We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life. If you want to help us build knowledge and solve big problems, let's talk.
Posted 4 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Calling all innovators – find your future at Fiserv.

We’re Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day, quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Sr Associate, Application Support

About Your Role
At Fiserv, we are committed to providing exceptional service and support to our clients. As an Application Support - Sr Associate II, you will be part of a dedicated team ensuring the smooth operation and maintenance of critical business applications. This role involves diagnosing and resolving technical issues, providing guidance to end users, and collaborating with various teams to improve application performance and reliability.

What You'll Do
- Provide support for business applications, ensuring maximum uptime and performance.
- Troubleshoot and resolve application issues, collaborating with development teams as needed.
- Monitor application performance and recommend improvements to enhance efficiency.
- Document support activities, maintain detailed logs, and develop user guides.
Responsibilities listed are not intended to be all-inclusive and may be modified as necessary.

Experience You'll Need To Have
- 5+ years of experience in application support.
- 5+ years of experience in Java/J2EE and Spring Boot.
- Experience troubleshooting and resolving technical issues.
- Working knowledge of Oracle and SQL.
- Knowledge of Git and shell scripting.

Experience That Would Be Great To Have
- Familiarity with monitoring and logging tools.
- Familiarity with React.
- Experience with automation and scripting languages.

Thank you for considering employment with Fiserv.
Please apply using your legal name, complete the step-by-step profile, and attach your resume (either is acceptable, both are preferable).

Our Commitment To Diversity And Inclusion
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note To Agencies
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning About Fake Job Posts
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cybercriminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
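The monitoring and log-maintenance duties in the application support role above often start with a first-pass triage: group log lines by error type to see what is failing most. A sketch with an invented log format and messages:

```python
# First-pass log triage: count occurrences of each error type.
from collections import Counter
import re

log = """\
2024-05-01 10:00:01 ERROR PaymentService TimeoutException upstream call
2024-05-01 10:00:03 INFO  PaymentService request completed
2024-05-01 10:00:07 ERROR PaymentService TimeoutException upstream call
2024-05-01 10:00:09 ERROR LedgerService NullPointerException posting entry
"""

# Capture the exception name that follows "ERROR <service>".
errors = Counter(
    m.group(1)
    for line in log.splitlines()
    if (m := re.search(r"ERROR \S+ (\w+)", line))
)
top = errors.most_common(1)[0]   # the most frequent error type and its count
```

The same grouping is what dashboards in monitoring tools automate; doing it by hand on a raw log is the fallback when those tools are unavailable.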
Posted 4 days ago
9.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Description
This is a remote position.

Job Summary
We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines, with a strong focus on time series forecasting and upsert-ready architectures. This role requires end-to-end ownership of the data lifecycle, from ingestion to partitioning, versioning, and BI delivery. The ideal candidate must be highly proficient in AWS data services, PySpark, and versioned storage formats like Apache Hudi/Iceberg, and must understand the nuances of data quality and observability in large-scale analytics systems.

Responsibilities
- Design and implement data lake zoning (Raw → Clean → Modeled) using Amazon S3, AWS Glue, and Athena.
- Ingest structured and unstructured datasets, including POS, USDA, Circana, and internal sales data.
- Build versioned and upsert-friendly ETL pipelines using Apache Hudi or Iceberg.
- Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modeling.
- Optimize Athena datasets with partitioning, CTAS queries, and metadata tagging.
- Implement S3 lifecycle policies, intelligent file partitioning, and audit logging.
- Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs.
- Integrate robust data quality checks using custom logs, AWS CloudWatch, or other DQ tooling.
- Design and manage a forecast feature registry with metrics versioning and traceability.
- Collaborate with BI and business teams to finalize schema design and deliverables for dashboard consumption.

Requirements
Essential Skills (Job):
- Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and the Glue Data Catalog.
- Strong command of PySpark, dbt-core, CTAS query optimization, and partition strategies.
- Working knowledge of Apache Hudi, Iceberg, or Delta Lake for versioned ingestion.
- Experience in S3 metadata tagging and scalable data lake design patterns.
- Expertise in feature engineering and forecasting dataset preparation (lags, trends, windows).
- Proficiency in Git-based workflows (Bitbucket), CI/CD, and deployment automation.
- Strong understanding of time series KPIs, such as revenue forecasts, occupancy trends, or demand volatility.
- Data observability best practices, including field-level logging, anomaly alerts, and classification tagging.

Essential Skills (Personal):
- Independent, critical thinker with the ability to design for scale and evolving business logic.
- Strong communication and collaboration with BI, QA, and business stakeholders.
- High attention to detail in ensuring data accuracy, quality, and documentation.
- Comfortable interpreting business-level KPIs and transforming them into technical pipelines.

Preferred Skills (Job):
- Experience with statistical forecasting frameworks such as Prophet, GluonTS, or related libraries.
- Familiarity with Superset or Streamlit for QA visualization and UAT reporting.
- Understanding of macroeconomic datasets (USDA, Circana) and third-party data ingestion.

Preferred Skills (Personal):
- Proactive, ownership-driven mindset with a collaborative approach.
- Strong communication and collaboration skills.
- Strong problem-solving skills with attention to detail.
- Ability to work under stringent deadlines and demanding client conditions.
- Strong analytical and problem-solving skills.
- Ability to work in fast-paced, delivery-focused environments.
- Strong mentoring and documentation skills for scaling the platform.

Other Relevant Information
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum 9+ years of experience in data engineering and architecture.

Benefits
This role offers the flexibility of working remotely in India. LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
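The lagged and rolling features mentioned for forecast-ready datasets can be sketched in plain Python; in the pipeline itself these would be PySpark window functions or dbt models over partitioned tables. The revenue series and feature names below are invented for illustration:

```python
# Lag and rolling-mean features for a time series, the building blocks of
# forecast-ready datasets.
def lag(series: list, k: int) -> list:
    # Value k periods earlier; None where no history exists yet.
    return [None] * k + series[:-k] if k else list(series)

def rolling_mean(series: list, window: int) -> list:
    # Mean of the trailing `window` values; None until enough history exists.
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window:i + 1]) / window)
    return out

revenue = [100.0, 120.0, 90.0, 110.0]
features = {
    "lag_1": lag(revenue, 1),             # previous period's revenue
    "roll_2": rolling_mean(revenue, 2),   # trailing two-period average
}
```

In Spark the same features come from `F.lag(...).over(window)` and a rows-between rolling average, but the None-padding semantics at the start of the series are identical.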
Posted 4 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities
As a member of the incident/work order/change handling team, you will have the following accountabilities:
- Work as an SME for Zscaler support in operations for ZIA, ZPA, and ZDX.
- Assess and orchestrate the current and planned security posture for NTT Data's security infrastructure, providing recommendations for improvement and risk reduction.
- Identify and propose process improvements, and identify opportunities for new processes and procedures to reduce risk.
- Support security incident response as required; act as first-line responder to reported or detected incidents.
- Perform security research, analysis, security vulnerability assessments, and penetration tests.
- Provide security audit and investigation support.
- Monitor and track security systems for vulnerabilities and respond to potential security vulnerabilities.
- Provide support for the vulnerability management program.
- Provide 24x7 support as part of an operations team working in shifts.
- Participate in on-call system administration support, including but not limited to weekends, holidays, and after-business hours as required to service the needs of the business.

Skills And Experience
- 4 to 5+ years in the information security space.
- Strong experience with the ServiceNow ticketing tool, dashboards, and integration.
- Strong experience with Zscaler ZIA, ZPA, and ZDX.
- Strong experience with vulnerability management programs.
- Strong experience with the Qualys Vulnerability Management tool.
- Good to have: experience with CrowdStrike EDR and SIEM.
- Strong experience with multiple network operating systems, including two or more of the following: Cisco IOS, Juniper ScreenOS or Junos, Fortinet FortiOS, Check Point GAiA, or Palo Alto Networks PAN-OS; Tanium, Rapid7, Nessus, Nitro ESM, Symantec SEP, Symantec MessageLabs, Thales encryption, Allgress, Forcepoint, Blue Coat, Firepower, Cisco ISE, Carbon Black, Titus, EnCase.
- Strong oral, written, and presentation abilities.
- Experience with M365 Copilot.
Some experience with Unix/Linux system administration. Strong experience with logging and alerting platforms, including SIEM integration. Current understanding of industry trends and emerging threats, and working knowledge of incident response methodologies and technologies. Desirable: Zscaler Associate and Professional certifications for ZIA, ZPA, and ZDX. Excellent experience in Zscaler ZIA, ZPA, and ZDX. Experience in vulnerability management programs. Experience in the Qualys vulnerability management tool. Well-rounded background in network, host, database, and application security. Experience implementing security controls in a bi-modal IT environment. Experience driving a culture of security awareness. Experience administering network devices, databases, and/or web application servers. Professional IT accreditations (CISM, CCSA, CCSE, JNCIA, CCNA, CISSP, CompTIA Security+) good to have. Abilities This is a non-customer-facing role, but an ability to build strong relationships with internal teams and security leadership is essential. Act as incident coordinator: review all security tools, ingest incident data, track incident status, coordinate with internal and external assets to fulfill information requirements, and initiate escalation procedures. Document daily work and new processes. Embrace a culture of continuous service improvement and service excellence. Stay up to date on security industry trends.
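The vulnerability-management duties above come down to deciding which findings get remediated first. As a purely illustrative sketch (the scoring rule below is hypothetical, not a Qualys or Zscaler feature), a triage helper might rank findings by CVSS base score with an exposure bump for internet-facing assets:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float          # CVSS base score, 0.0-10.0
    internet_facing: bool

def triage(findings):
    """Sort findings so the riskiest are remediated first.

    Internet-facing assets get a hypothetical +2.0 exposure bump,
    capped at 10.0; the weighting is illustrative, and a real program
    would follow its own documented risk policy.
    """
    def risk(f):
        return min(f.cvss + (2.0 if f.internet_facing else 0.0), 10.0)
    return sorted(findings, key=risk, reverse=True)

findings = [
    Finding("CVE-2024-0001", 7.5, False),
    Finding("CVE-2024-0002", 6.8, True),   # effective risk 8.8
    Finding("CVE-2024-0003", 9.9, False),
]
for f in triage(findings):
    print(f.cve)
```

In practice the inputs would come from a scanner export or API rather than hard-coded records.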
Posted 4 days ago
0 years
0 Lacs
Gandhinagar, Gujarat, India
On-site
Job Title: Site Reliability Engineer Location: InfoCity, Gandhinagar, India (On-site) About AGIL f(x): AGIL f(x) is a pioneering force in the Life Sciences industry, dedicated to transforming enterprise teams with bespoke AI-powered business systems. We specialize in designing intelligent solutions that automate complex workflows and enhance decision-making by replicating human reasoning, significantly reducing manual effort, and accelerating operational speed. Our core offerings span critical areas including Quality Management Systems, Regulatory and Clinical Platforms, Safety and Compliance Tools, and Medical and Commercial Systems. We pride ourselves on being strategic AI partners, blending profound business understanding with deep technical expertise to deliver secure, scalable systems engineered for enduring success. About the Role: We are seeking a highly skilled and proactive Site Reliability Engineer (SRE) to join our growing team in Gandhinagar. This is a crucial full-time, on-site role where you will be instrumental in ensuring the continuous reliability, scalability, and performance of our cutting-edge AI-powered business systems. As an SRE at AGIL f(x), you will apply software engineering principles to operations, proactively identifying and resolving potential issues, automating infrastructure, and driving operational excellence to support our mission of empowering Life Sciences organizations. Key Responsibilities: System Reliability & Performance: Take ownership of the reliability, availability, and performance of our AI-powered business systems, ensuring they meet defined Service Level Objectives (SLOs). Monitoring & Alerting: Design, implement, and maintain robust monitoring, logging, and alerting solutions to provide deep visibility into system health, performance, and user experience. This includes leveraging tools to track key metrics and set up proactive alerts. 
Troubleshooting & Incident Management: Lead and participate in the diagnosis, troubleshooting, and resolution of complex technical issues across our infrastructure and applications. Drive incident post-mortems (RCAs) to identify root causes and implement preventative measures and automation to reduce future occurrences. Infrastructure Management: Manage, maintain, and evolve our cloud infrastructure (e.g., AWS, Azure, GCP), ensuring scalability, security, and efficiency. This includes provisioning, configuring, and optimizing resources. Software Development for Operations: Develop, test, and deploy software solutions and automation scripts (e.g., Python, Go, Bash) to eliminate manual toil, improve operational efficiency, and enhance system resilience. This could involve building custom tools, integrating systems, or automating deployment processes. Deployment & Release Management: Collaborate with development teams to establish and optimize CI/CD pipelines, ensuring smooth, reliable, and frequent deployments of new features and bug fixes with minimal downtime. System Design & Architecture: Provide input into system design and architecture decisions, advocating for reliability, scalability, and operational maintainability from the outset. Collaboration & Communication: Work closely with development, product, and QA teams to understand system requirements, anticipate operational challenges, and foster a culture of shared ownership for system reliability. Communicate effectively with stakeholders during incidents and on long-term initiatives. Documentation: Create and maintain comprehensive documentation, runbooks, and playbooks for system configurations, operational procedures, and troubleshooting guides. Qualifications: Education: Bachelor's degree in Computer Science, Information Technology, or a related technical field. 
Experience: Proven experience in Site Reliability Engineering, DevOps, or a similar role focused on maintaining and improving system reliability and performance. System Administration: Strong proficiency in Linux/Unix system administration, including shell scripting. Troubleshooting Expertise: Excellent diagnostic and troubleshooting skills across distributed systems, networks, and applications. Infrastructure Management: Solid understanding of cloud computing concepts and hands-on experience with at least one primary cloud provider (e.g., AWS, Azure, GCP). Software Development: Demonstrated software development skills in at least one high-level programming language (e.g., Python, Go, Java, C#), with an emphasis on writing reliable, maintainable code for automation and operational tooling. Problem-Solving: Exceptional analytical and problem-solving abilities with meticulous attention to detail. Collaboration & Communication: Strong interpersonal skills with the ability to collaborate effectively with cross-functional teams and communicate complex technical information clearly. Proactive Mindset: A strong commitment to continuous improvement, automation, and a "fix it once" mentality. Bonus Points (Preferred Qualifications): Experience working with AI-powered systems, machine learning pipelines, or data-intensive applications. Familiarity with containerization technologies (e.g., Docker, Kubernetes). Experience with Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation, Ansible). Knowledge of various monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack, Datadog, Splunk). Experience with relational and/or NoSQL databases. Understanding of networking concepts (TCP/IP, DNS, Load Balancing, Firewalls). Why Join AGIL f(x)? Be part of a rapidly growing company at the forefront of AI innovation in the Life Sciences industry. 
Work on impactful projects that directly contribute to automating and optimizing critical business processes for our clients. Collaborate with a team of brilliant minds combining deep technical expertise with specialized industry knowledge. Opportunity to work with cutting-edge technologies and shape the future of AI-powered enterprise systems. A challenging yet rewarding environment that encourages continuous learning and professional growth.
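Since the SRE role above centers on meeting defined SLOs, it helps to see how an availability target translates into a concrete error budget. A minimal sketch (the 99.9% target and 30-day window are illustrative examples, not AGIL f(x) figures):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime implied by an availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, window_days: int, downtime_minutes: float) -> float:
    """How much error budget is left after the downtime observed so far."""
    return error_budget_minutes(slo, window_days) - downtime_minutes

# A 99.9% SLO over 30 days permits roughly 43.2 minutes of downtime;
# 10 minutes of incidents would leave roughly 33.2 minutes of budget.
print(round(error_budget_minutes(0.999), 1))
print(round(budget_remaining(0.999, 30, 10), 1))
```

Teams typically alert on the budget burn rate rather than raw downtime, so a fast-burning incident pages immediately while slow erosion surfaces in review.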
Posted 4 days ago
6.0 - 11.0 years
8 - 13 Lacs
Pune
Work from Office
Job Description Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support, and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist. In this role, you will: Architectural Leadership: Design and define the architectural roadmap for the Digital Business Banking platform, ensuring scalability, security, and high performance. Collaborate with cross-functional teams, including product managers, engineers, and stakeholders, to align architecture with business goals. Advocate and implement microservices-based architecture leveraging Java and Spring Boot frameworks. Development and Coding: Actively participate in coding, design reviews, and solution implementation for critical components. Build and maintain highly efficient, reusable, and reliable code in Java with strong adherence to coding standards. DevOps and CI/CD: Drive the implementation of DevOps practices across teams, ensuring efficient CI/CD pipelines. Collaborate with operations teams to build, deploy, and manage infrastructure as code using modern DevOps tools (e.g., Jenkins, GitLab, Docker, Kubernetes). Establish robust monitoring and alerting mechanisms to ensure platform reliability. Cloud and GCP Expertise: Architect cloud-native solutions leveraging GCP services such as GKE, Cloud Functions, BigQuery, and Pub/Sub. Guide teams in adopting GCP best practices for deployment, cost optimization, and security. Technical Strategy and Innovation: Evaluate emerging technologies, tools, and frameworks to drive innovation and maintain competitive advantages. Provide technical guidance, mentorship, and thought leadership to engineering teams. Quality Assurance: Ensure comprehensive testing strategies are integrated into the development lifecycle. Promote secure coding practices and ensure compliance with banking security standards. Collaboration and Communication: Act as the technical point of contact for stakeholders, clearly communicating architecture decisions, trade-offs, and recommendations. Drive collaboration across distributed teams in agile environments. Requirements To be successful in this role, you should meet the following requirements: Core Expertise: Proficiency in Java (11+) and Spring Boot, with a strong understanding of backend development. Hands-on experience with microservices architecture, RESTful APIs, and event-driven systems. Good understanding of client-side scripting and JavaScript frameworks like React. DevOps and CI/CD: Expertise in building CI/CD pipelines using tools like Jenkins, GitLab, or Azure DevOps. Hands-on knowledge of containerization (Docker) and orchestration (Kubernetes). Proficiency in monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Cloud Proficiency: Proven experience designing and deploying applications on Google Cloud Platform (GCP). Familiarity with GCP services such as GKE, Cloud Run, Cloud Functions, Pub/Sub, and BigQuery. Technical Leadership: Demonstrated ability to lead technical discussions, make architectural decisions, and mentor teams. Experience in defining and implementing architectural standards, coding guidelines, and best practices. Banking/FinTech Knowledge (Preferred): Experience in digital banking, payments, or financial services domains is a plus. Familiarity with regulatory compliance and security standards in banking. Soft Skills: Strong problem-solving skills and attention to detail. Excellent written and verbal communication skills. Ability to thrive in a fast-paced, collaborative environment. You'll achieve more when you join HSBC. hsbc /careers HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working, and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by HSBC Software Development India
Posted 4 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for an MLOps Engineer @ Gurgaon location. Job Responsibilities: Design and implement CI/CD pipelines for machine learning workflows. Develop and maintain production-grade ML pipelines using tools like MLflow, Kubeflow, or Airflow. Automate model training, testing, deployment, and monitoring processes. Collaborate with Data Scientists to operationalize ML models, ensuring scalability and performance. Monitor deployed models for drift, degradation, and bias, and trigger retraining as needed. Maintain and improve infrastructure for model versioning, artifact tracking, and reproducibility. Integrate ML solutions with microservices/APIs using FastAPI or Flask. Work on containerized environments using Docker and Kubernetes. Implement logging, monitoring, and alerting for ML systems (e.g., Prometheus, Grafana). Champion best practices in code quality, testing, and documentation. Required Skills: 7+ years of experience in Python development and ML/AI-related engineering roles. Strong experience in MLOps tools like MLflow, Kubeflow, Airflow, or similar. Deep understanding of Docker, Kubernetes, and container orchestration for ML workflows. Hands-on experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code (Terraform/CDK). Familiarity with model deployment and serving frameworks (e.g., Seldon, TorchServe, TensorFlow Serving). Good understanding of DevOps practices and CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI). Experience with data versioning tools (e.g., DVC) and model lifecycle management. Exposure to monitoring tools for ML and infrastructure health. Experience: 7-12 years. Job Location: Gurgaon. Interested candidates can share your CV to mangani.paramanandhan@bounteous.com; I will call you shortly.
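One responsibility above, monitoring deployed models for drift, is often implemented as a distribution-shift statistic over model scores. A self-contained sketch using the Population Stability Index (the 0.1/0.25 thresholds mentioned in the docstring are common conventions, not tied to MLflow or any specific tool):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two score distributions.

    A common drift heuristic: PSI < 0.1 is stable, 0.1-0.25 is a
    moderate shift, and > 0.25 suggests significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def frac(xs, i):
        left, right = lo + i * width, lo + (i + 1) * width
        n = sum(1 for x in xs
                if left <= x < right or (i == bins - 1 and x == hi))
        return max(n / len(xs), 1e-6)   # floor to avoid log(0)
    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
assert abs(psi(baseline, baseline)) < 1e-9   # identical distributions
```

In a pipeline, `baseline` would be the training-time score distribution logged as an artifact, and a PSI above the chosen threshold would trigger the retraining job.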
Posted 4 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. The Cloud Engineer for our public cloud operations team will focus on automating governance policies in our cloud environments. The goal is to enable self-service wherever possible without compromising security. The team is responsible for partnering with multiple stakeholders in framing and implementing governance policy frameworks for cloud platforms, primarily on AWS and also on OCI and GCP. What You’ll Be Doing... The governance team is primarily responsible for ensuring that the data and processes used in public cloud platforms are secured and controlled so that application workloads in those cloud platforms are not exposed to unintended users or services. Governance includes implementation of strict policies for managing users, roles, permissions, and accounts; ensuring enforcement of and compliance with those policies; visibility into who is doing what; and auditing what changes were made to the environment. Another aspect of governance is periodic audits of resource utilization and terminating services that are under-utilized or non-compliant with organizational standards. Design and automate a governance framework across our cloud environments, with an emphasis on AWS. Automate and maintain the Cloud Governance web portal to allow application and infra teams to generate reports and raise exception requests. 
Monitoring, logging, audits, and automated policy enforcement for security and cost compliance. Ensuring service availability and continuity through proper response to incidents and requests. What We’re Looking For... You’ll need to have: Bachelor’s degree or one or more years of work experience. Core experience and knowledge of Python and Django frameworks, HTML, Angular, and scripting languages. Good experience with the Apache web server on Linux OS. One or more years of experience in building cloud platform architecture solutions on public and/or private cloud platforms, with an emphasis on governance/security tools. Hands-on knowledge of core AWS services like EC2, S3, EBS, ELB, AWS Lambda, CLI, etc., and familiarity with AWS network services. Even Better To Have Master's degree. Cloud certification. Experience in infrastructure and cloud services with proficiency in automation using Python, ReactJS, Unix shell, and other scripting languages. Experience with modern source control repositories (e.g., Git) and DevOps toolsets (Jenkins, Ansible, etc.), and familiarity with Agile/Scrum methodologies. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
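Automated policy enforcement of the kind described above is usually expressed as code. A minimal sketch of a tag-compliance check (the required tag set and the inventory shape are hypothetical; a real audit would read resource metadata from an AWS API or an inventory export):

```python
REQUIRED_TAGS = {"owner", "cost-center", "environment"}   # hypothetical policy

def non_compliant(resources):
    """Return (resource ID, missing tags) for resources violating policy.

    `resources` mirrors the shape of a typical cloud inventory export;
    tag keys are compared case-insensitively.
    """
    flagged = []
    for r in resources:
        tags = {t["Key"].lower() for t in r.get("Tags", [])}
        missing = REQUIRED_TAGS - tags
        if missing:
            flagged.append((r["ResourceId"], sorted(missing)))
    return flagged

inventory = [
    {"ResourceId": "i-0abc", "Tags": [{"Key": "Owner", "Value": "team-a"}]},
    {"ResourceId": "i-0def", "Tags": [{"Key": "owner", "Value": "x"},
                                      {"Key": "cost-center", "Value": "42"},
                                      {"Key": "environment", "Value": "prod"}]},
]
print(non_compliant(inventory))
```

A governance portal would surface this report to application teams and feed the same result into automated remediation or exception workflows.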
Posted 4 days ago
2.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are a team of 70 people now (with team members joining from Livspace, Gupshup, Setu, etc.). This is a high-impact, critical role to craft the future roadmap of the organisation. And we're just getting started! Responsibilities Design and develop microservices that can work in a large-scale, multi-tenant environment. Explore design implications and work towards an appropriate balance between functionality, performance, and maintainability. Take ownership from the ideation phase to deployment and maintenance. Deploy and maintain applications in a secure AWS environment. Collaborate closely with cross-functional teams, including Design, Product, Data Science, and Analytics. Actively participate in the hiring process to bring world-class programmers to the team. Requirements 2-4 years of experience in server-side development. Strong programming skills in Java. Hands-on experience in API development and the Spring framework. Good understanding of SQL and NoSQL databases. Experience in test-driven development (writing unit tests and API tests). Understanding of basic cloud computing concepts and experience with any major cloud service provider (AWS/GCP/Azure). Ability to build and deploy applications in a containerized environment. Understanding of application logging and monitoring systems like Prometheus or Kibana. Curious to explore cutting-edge technologies and bake them into products. Zeal and drive to take end-to-end ownership. This job was posted by Sneha Hegde from Shopflo.
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. The Cloud Engineer for our public cloud operations team will focus on automating governance policies in our cloud environments. The goal is to enable self-service wherever possible without compromising security. The team is responsible for partnering with multiple stakeholders in framing and implementing governance policy frameworks for cloud platforms, primarily on AWS and also on OCI and GCP. What You’ll Be Doing... The governance team is primarily responsible for ensuring that the data and processes used in public cloud platforms are secured and controlled so that application workloads in those cloud platforms are not exposed to unintended users or services. Governance includes implementation of strict policies for managing users, roles, permissions, and accounts; ensuring enforcement of and compliance with those policies; visibility into who is doing what; and auditing what changes were made to the environment. Another aspect of governance is periodic audits of resource utilization and terminating services that are under-utilized or non-compliant with organizational standards. Design and automate a governance framework across our cloud environments, with an emphasis on AWS. Automate and maintain the Cloud Governance web portal to allow application and infra teams to generate reports and raise exception requests. 
Monitoring, logging, audits, and automated policy enforcement for security and cost compliance. Ensuring service availability and continuity through proper response to incidents and requests. What We’re Looking For... You’ll need to have: Bachelor’s degree or one or more years of work experience. Core experience and knowledge of Python and Django frameworks, HTML, Angular, and scripting languages. Good experience with the Apache web server on Linux OS. One or more years of experience in building cloud platform architecture solutions on public and/or private cloud platforms, with an emphasis on governance/security tools. Hands-on knowledge of core AWS services like EC2, S3, EBS, ELB, AWS Lambda, CLI, etc., and familiarity with AWS network services. Even Better To Have Master's degree. Cloud certification. Experience in infrastructure and cloud services with proficiency in automation using Python, ReactJS, Unix shell, and other scripting languages. Experience with modern source control repositories (e.g., Git) and DevOps toolsets (Jenkins, Ansible, etc.), and familiarity with Agile/Scrum methodologies. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 4 days ago
7.0 years
0 Lacs
India
On-site
What You Will Do: ● Design, develop, operate, and maintain cutting-edge solutions which allow for reuse and extensibility while still solving the primary problem at scale ● Build solutions for multiple services, work in the code, and understand at a detailed level how the software works ● Propose and own initiatives to completion while balancing various technical trade-offs, including speed to delivery vs. ongoing maintainability ● Provide high-quality code reviews and scalable architectural designs ● Mentor new hires and other engineers to help them become more proficient by example, tech talks, paired programming, and other avenues to increase technical efficiency across the organization What You’ll Need ● Broad experience architecting and implementing highly available, distributed, data-intensive applications (7+ years or equivalent track record) ● Experience building microservices, including decoupling applications and services from monolithic systems ● Experience with asynchronous event streaming platforms (e.g., Kafka, Pub/Sub) ● Strong working knowledge of SQL and NoSQL datastores ● Experience with front-end frameworks (e.g., ReactJS) ● Experience with cloud technologies (e.g., GCP or AWS) ● Proficiency in programming languages (Java, PHP, GraphQL) ● Experience with test automation and test authoring ● Experience with Continuous Integration (CI/CD) practices and tools (e.g., Buildkite, Jenkins) ● Experience leveraging monitoring and logging technologies (e.g., Datadog, Grafana) ● Excellent communication skills with demonstrated experience driving teams forward and an ability to influence decisions ● Preferred: a track record of technical leadership for teams following software development best practices (e.g., SOLID, TDD, GRASP, YAGNI); superior organizational and analytical skills with hypothesis-driven problem solving and turning data into actionable insights
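The requirements above pair event streaming platforms like Kafka or Pub/Sub with decoupling services from monoliths, and both hinge on one detail: such brokers typically guarantee at-least-once delivery, so consumers must tolerate duplicate messages. A broker-agnostic sketch of idempotent consumption (the event shape and in-memory dedup store are illustrative only):

```python
class IdempotentConsumer:
    """Processes each event at most once, keyed by event ID.

    At-least-once brokers can redeliver a message after a retry or
    rebalance; deduplicating in the handler makes redelivery harmless.
    The in-memory set stands in for a durable store.
    """
    def __init__(self, handler):
        self.handler = handler
        self.seen = set()

    def consume(self, event):
        if event["id"] in self.seen:
            return False            # duplicate delivery, skipped
        self.handler(event)
        self.seen.add(event["id"])  # mark done only after handling
        return True

processed = []
consumer = IdempotentConsumer(lambda e: processed.append(e["payload"]))
for ev in [{"id": 1, "payload": "a"},
           {"id": 1, "payload": "a"},   # redelivered duplicate
           {"id": 2, "payload": "b"}]:
    consumer.consume(ev)
print(processed)
```

In production the `seen` set would live in a durable store (for example, a keyed database table) so deduplication survives restarts and rebalances.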
Posted 4 days ago
7.0 years
0 Lacs
India
Remote
Join Tether and Shape the Future of Digital Finance At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction. Innovate with Tether Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT , relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services. But that’s just the beginning: Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities. Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET , our flagship app that redefines secure and private data sharing. Tether Education : Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity. Tether Evolution : At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways. Why Join Us? Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. 
We’ve grown fast, stayed lean, and secured our place as a leader in the industry. If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you. Are you ready to be part of the future? About The Job We are seeking a highly skilled Lead DevOps Engineer to: Lead and guide a team of DevOps specialists Architect, implement, and help maintain CI/CD pipelines using GitHub Deploy and manage critical infrastructure The ideal candidate will need extensive experience with Docker, JavaScript package publishing to NPM, and automating mobile app build processes, among other areas. Deep expertise in Linux system administration and networking will ensure scalable, secure, and highly available deployments. Responsibilities Mentor and lead a team of DevOps specialists, promoting best practices, documentation, and knowledge sharing. Collaborate cross-functionally (Dev, QA, Management, etc.) to enhance deployment quality, observability, and stability. Implement monitoring, logging, and alerting in systems to proactively detect issues and maintain system health. Design the architecture, implementation, and management of end-to-end CI/CD pipelines in GitHub Actions, ensuring rapid and reliable software delivery. Design and enforce test-driven deployment systems, integrating automated testing at every stage to maintain code quality and accelerate feedback loops. Oversee server system administration, including configuration, monitoring, patching, and troubleshooting. Keep up to date on industry trends and best practices, and evaluate and integrate new DevOps tools and processes. Requirements 7+ years in DevOps/Infrastructure roles, with at least 2-3 in a leadership/technical lead capacity. Expertise in containerization technologies—Docker image creation, registry management, and basic orchestration patterns. 
Hands-on experience managing JavaScript packages and publishing workflows to NPM, with a solid understanding of semantic versioning. Understanding of C++ build systems, specifically CMake, and experience optimizing native code pipelines using GitHub Actions. Strong Linux system administration and networking expertise, including shell scripting, package management, system performance troubleshooting, firewalls, and VPNs to secure and optimize deployments. Excellent leadership, problem-solving, and communication skills. Bachelor’s or Master’s degree in Computer Science, Engineering, or a related discipline. Important information for candidates Recruitment scams have become increasingly common. To protect yourself, please keep the following in mind when applying for roles: Apply only through our official channels. We do not use third-party platforms or agencies for recruitment unless clearly stated. All open roles are listed on our official careers page: https://tether.recruitee.com/ Verify the recruiter’s identity. All our recruiters have verified LinkedIn profiles. If you’re unsure, you can confirm their identity by checking their profile or contacting us through our website. Be cautious of unusual communication methods. We do not conduct interviews over WhatsApp, Telegram, or SMS. All communication is done through official company emails and platforms. Double-check email addresses. All communication from us will come from emails ending in @tether.to or @tether.io We will never request payment or financial details. If someone asks for personal financial information or payment at any point during the hiring process, it is a scam. Please report it immediately. When in doubt, feel free to reach out through our official website.
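The NPM publishing workflow mentioned above relies on semantic-versioning precedence: versions compare component by component as numbers, not as strings. A minimal sketch of that comparison (the function names are illustrative; pre-release tags such as `-beta.1`, which sort lower under semver, are deliberately out of scope here):

```python
def parse(version: str):
    """Split 'MAJOR.MINOR.PATCH' into a numerically comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def needs_publish(local: str, registry: str) -> bool:
    """True if the local version is ahead of what the registry holds.

    Handles plain MAJOR.MINOR.PATCH only; extending it to full semver
    would require pre-release and build-metadata handling.
    """
    return parse(local) > parse(registry)

# '1.10.0' beats '1.9.9' numerically even though it loses a string compare.
assert needs_publish("1.10.0", "1.9.9")
assert not needs_publish("2.0.0", "2.0.0")
```

A CI publish job typically runs a check like this against the registry's latest version and skips the publish step when nothing is ahead, keeping pipeline reruns idempotent.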
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are seeking a highly skilled and hands-on Mobile & Web Developer with strong proficiency in Flutter and modern web frameworks to join our engineering team. The ideal candidate will bring experience from fast-paced, product-driven environments, preferably within fintech, and possess a deep understanding of secure, scalable, and high-performance application development. Responsibilities Build and maintain cross-platform mobile applications using Flutter and a modern JavaScript/TypeScript web framework (React/Svelte/Vue, etc.). Integrate complex SDKs and APIs, especially those related to payments, authentication, and analytics. Ensure secure mobile development practices (TLS, encryption, secure storage). Implement and maintain CI/CD pipelines using Git-based workflows. Write and maintain end-to-end automation tests using Appium or Espresso. Collaborate closely with backend, QA, and product teams in an Agile environment. Take ownership of feature delivery, from design through deployment and monitoring. Requirements Languages & Frameworks: TypeScript/JavaScript, Flutter (hands-on), and any modern web framework (React/Svelte/Vue, etc.). Testing: Basic experience with Appium or Espresso for E2E testing. CI/CD & Release: Familiarity with Git-based CI/CD pipelines and release processes. Debugging & Ownership: Strong problem-solving, debugging, and delivery ownership. Development Workflow: Agile environment experience and familiarity with monitoring/logging tools (e.g., OpenTelemetry). Exposure to Adobe Analytics (implementation, tagging). Experience with Firebase and backend-triggered mobile flows. Integration of Sentry for error monitoring. Exposure to secure code practices and security compliance in mobile apps. Fintech, Payments, or Banking domain experience. Experience building scalable apps for high-volume transactional systems. Comfortable working in cloud-native environments with microservices architecture. This job was posted by Ranjana S Bhushan from CAW Studios.
Posted 4 days ago
2.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description: CloudOps Engineer

Who we are: Acqueon's conversational engagement software lets customer-centric brands orchestrate campaigns and proactively engage with consumers using voice, messaging, and email channels. Acqueon leverages a rich data platform, statistical and predictive models, and intelligent workflows to let enterprises maximize the potential of every customer conversation. Acqueon is trusted by 200 clients across industries to increase sales, drive proactive service, improve collections, and develop loyalty. At our core, Acqueon is a customer-centric company with a burning desire (backed by a suite of awesome, AI-powered technology) to help businesses provide friction-free, delightful, and referral-worthy customer experiences.

Position Overview
We are seeking a highly skilled CloudOps Engineer with expertise in Amazon Web Services (AWS) to join our team. The ideal candidate will be responsible for designing, implementing, and maintaining cloud infrastructure and SaaS applications, ensuring high availability, scalability, and security. You will work collaboratively with development, operations, and security teams to automate deployment processes, optimize system performance, and drive operational excellence.

As a CloudOps Engineer at Acqueon you will:
- Ensure the highest uptime for customers in our SaaS environment.
- Provision customer tenants and manage the SaaS platform across the staging and production environments.
- Infrastructure Management: Design, deploy, and maintain secure and scalable AWS cloud infrastructure using services like EC2, S3, RDS, Lambda, and CloudFormation.
- Monitoring & Incident Response: Set up monitoring solutions (e.g., CloudWatch, Grafana) to detect, respond to, and resolve issues quickly, ensuring uptime and reliability.
- Cost Optimization: Continuously monitor cloud usage and implement cost-saving strategies such as Reserved Instances, Spot Instances, and resource rightsizing.
- Backup & Recovery: Implement robust backup and disaster recovery solutions using AWS tools like AWS Backup, S3, and RDS snapshots.
- Security Compliance: Configure security best practices, including IAM policies, security groups, and encryption, while adhering to organizational compliance standards.
- Infrastructure as Code (IaC): Use Terraform, CloudFormation, or AWS CDK to provision, update, and manage infrastructure in a consistent and repeatable manner.
- Automation & Configuration Management: Automate manual processes and system configurations using Ansible, Python, or shell scripting.
- Containerization & Orchestration: Manage containerized applications using Docker and Kubernetes (EKS) for scaling and efficient deployment.

Skills & Qualifications:
- 2-5 years of experience in Cloud Operations, Infrastructure Management, or DevOps Engineering.
- Deep expertise in AWS services (EC2, S3, RDS, VPC, Lambda, IAM, CloudFormation, etc.).
- Strong experience with Terraform for infrastructure provisioning and automation.
- Proficiency in scripting with Python, Bash, or PowerShell for cloud automation.
- Hands-on experience with monitoring and logging tools (AWS CloudWatch, Prometheus, Datadog, ELK Stack, etc.).
- Strong understanding of networking concepts, security best practices, IAM policies, and role-based access control (RBAC).
- Experience troubleshooting SaaS application performance, system reliability, and cloud-based service disruptions.
- Familiarity with containerization technologies (Docker, Kubernetes, AWS ECS, or EKS).
- Willingness to work in a 24/7 operational environment with rotational shifts.

Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer).
- Experience with hybrid cloud environments and on-premises-to-cloud migrations.
- Familiarity with other cloud platforms like Azure or GCP.
- Knowledge of database management (e.g., RDS, DynamoDB) and caching solutions (e.g., Redis, ElastiCache).
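The cost-optimization and scripting duties above often come down to small automation helpers. As an illustrative sketch only (the data shape and threshold are assumptions, not Acqueon's actual tooling), a rightsizing pass over per-instance CPU samples might look like:

```python
# Hypothetical rightsizing helper: flag instances whose average CPU
# utilization over the observation window falls below a threshold.
# In practice the samples would come from CloudWatch; here they are
# supplied directly so the logic stands alone.
def flag_underutilized(metrics, threshold=20.0):
    """metrics: {instance_id: [cpu %% samples]} -> sorted candidate ids."""
    candidates = []
    for instance_id, samples in metrics.items():
        if samples and sum(samples) / len(samples) < threshold:
            candidates.append(instance_id)
    return sorted(candidates)

sample = {
    "i-0abc": [5.0, 7.5, 6.0],     # mostly idle -> rightsizing candidate
    "i-0def": [55.0, 61.0, 48.0],  # busy -> leave alone
}
print(flag_underutilized(sample))  # ['i-0abc']
```

A real job would feed this from CloudWatch metrics and act on the result (downsize, schedule, or tag for review) rather than just printing it.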
This is an excellent opportunity for those seeking to continue building upon their existing skills. The right individual will be self-motivated and a creative problem solver. You should possess the ability to seek out the correct information efficiently, both through individual effort and with the team. By joining the Acqueon team, you can enjoy the benefits of working for one of the industry's fastest-growing and most highly respected technology companies. If you, or someone you know, would be a great fit for us, we would love to hear from you today! Use the form to apply today or submit your resume.
Posted 4 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Join Inito's DevOps team, playing a crucial role in building, maintaining, and scaling our cloud infrastructure and operational excellence. This role offers a unique opportunity to contribute across development and operations, streamlining processes, enhancing system reliability, and strengthening our security posture. You will work closely with engineering, data science, and other cross-functional teams in a fast-paced, growth-oriented environment.

Responsibilities
- Assist in managing and maintaining cloud infrastructure on AWS, GCP, and on-premise compute (including bare-metal servers).
- Support and improve CI/CD pipelines, contributing to automated deployment processes.
- Contribute to automation efforts through scripting, reducing manual toil, and improving efficiency.
- Monitor system health and logs, assisting in troubleshooting and resolving operational issues.
- Develop a deep understanding of application behavior, including memory and disk usage patterns, database interactions, and overall resource consumption, to ensure performance and stability.
- Participate in incident response and post-mortem analysis, contributing to faster resolution and preventing recurrence.
- Support the implementation of, and adherence to, cloud security best practices (e.g., IAM, network policies).
- Assist in maintaining and evolving Infrastructure as Code (IaC) solutions.

Requirements
- Cloud Platforms: At least 2 years of hands-on experience with Amazon Web Services (AWS) and/or Google Cloud Platform (GCP), including core compute, storage, networking, and database services (e.g., EC2, S3, VPC, RDS, GCE, GCS, Cloud SQL).
- On-Premise Infrastructure: Setup, automation, and management.
- Operating Systems: Proficiency in Linux environments and shell scripting (Bash).
- Scripting/Programming: Foundational knowledge and practical experience with Python for automation.
- Containerization: Familiarity with Docker concepts and practical usage. Basic understanding of container orchestration concepts (e.g., Kubernetes).
- CI/CD: Understanding of Continuous Integration/Continuous Delivery principles and experience with at least one CI/CD tool (e.g., Jenkins, GitLab CI, CircleCI, GitHub Actions). Familiarity with build and release automation concepts.
- Version Control: Solid experience with Git for code management.
- Monitoring: Experience with basic monitoring and alerting tools (e.g., AWS CloudWatch, Grafana). Familiarity with log management concepts.
- Networking: Basic understanding of networking fundamentals (DNS, Load Balancers, VPCs).
- Infrastructure as Code (IaC): Basic understanding of Infrastructure as Code (IaC) principles.

Good To Have Skills & Qualifications
- Cloud Platforms: Hands-on experience with both AWS and GCP.
- Hybrid & On-Premise Cloud Architectures: Hands-on experience with VMware vSphere, Oracle OCI, or any on-premises infrastructure platform.
- Infrastructure as Code (IaC): Hands-on experience with Terraform or AWS CloudFormation.
- Container Orchestration: Hands-on experience with Kubernetes (EKS, GKE).
- Databases: Familiarity with PostgreSQL and Redis administration and optimization.
- Security Practices: Exposure to security practices like SAST/SCA or familiarity with IAM best practices beyond basics. Awareness of secrets management concepts (e.g., HashiCorp Vault, AWS Secrets Manager) and vulnerability management processes.
- Observability Stacks: Experience with centralized logging (e.g., ELK Stack, Loki) or distributed tracing (e.g., Jaeger, Zipkin, Tempo).
- Serverless: Familiarity with serverless technologies (e.g., AWS Lambda, Google Cloud Functions).
- On-call/Incident Management Tools: Familiarity with on-call rotation and incident management tools (e.g., PagerDuty).
- DevOps Culture: A strong passion for automation, continuous improvement, and knowledge sharing.
- Configuration Management: Experience with tools like Ansible for automating software provisioning, configuration management, and application deployment, especially in on-premise environments.

Soft Skills
- Strong verbal and written communication skills, with an ability to collaborate effectively across technical and non-technical teams.
- Excellent problem-solving abilities and a proactive, inquisitive mindset.
- Eagerness to learn new technologies and adapt to evolving environments.
- Ability to work independently and contribute effectively as part of a cross-functional team.

This job was posted by Ronald J from Inito.
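The log-monitoring and toil-reduction responsibilities above are the kind of task a short Python script handles well. A minimal sketch, assuming a hypothetical `[LEVEL] component:` log format (not any specific Inito convention):

```python
import re
from collections import Counter

# Illustrative toil-reduction script: scan application log lines and
# flag any component whose ERROR count meets a threshold, so an
# on-call engineer can focus on the noisiest source first.
LOG_PATTERN = re.compile(r"\[(?P<level>\w+)\]\s+(?P<component>[\w-]+):")

def error_hotspots(lines, threshold=2):
    errors = Counter()
    for line in lines:
        m = LOG_PATTERN.search(line)
        if m and m.group("level") == "ERROR":
            errors[m.group("component")] += 1
    return {c: n for c, n in errors.items() if n >= threshold}

logs = [
    "[ERROR] api-gateway: upstream timeout",
    "[INFO]  api-gateway: request served",
    "[ERROR] api-gateway: upstream timeout",
    "[ERROR] worker-1: job failed",
]
print(error_hotspots(logs))  # {'api-gateway': 2}
```

In a real deployment the same counting logic would typically live in the observability stack (e.g., a CloudWatch metric filter or an ELK query) rather than a standalone script.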
Posted 4 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We're looking for a Senior Backend Engineer (SDE III) who can architect and build robust backend systems while also managing infrastructure and deployments. This is a hands-on role with full ownership, from API design and database performance to cloud infrastructure and CI/CD automation. You'll collaborate across product, design, and frontend teams, while also mentoring junior developers and driving best practices.

Responsibilities
- Design, develop, and maintain scalable backend services using a modern framework of your choice.
- Build well-structured APIs (REST or GraphQL) with robust authentication, authorization, and versioning.
- Define and evolve database schemas; optimize queries for performance and reliability.
- Use NoSQL databases (where required) for high-throughput or flexible data needs.
- Own infrastructure setup and manage deployments on cloud platforms; there is no separate DevOps team.
- Automate CI/CD workflows, containerize services using Docker, and maintain deployment pipelines.
- Ensure system performance, resilience, and observability through caching, queuing, and monitoring.
- Implement secure coding practices, including data encryption, access controls, and input validation.
- Debug and troubleshoot issues across the stack, from the database to the API layer to production.
- Collaborate with cross-functional teams to define integration contracts and delivery timelines.
- Mentor and guide junior engineers, participate in code reviews, and lead architecture discussions.

Requirements
- Strong hands-on experience with any modern backend framework (Node.js, RoR, Python Django, Spring Boot, etc.).
- Proficiency in working with relational databases like PostgreSQL or MySQL: schema design, joins, and indexing.
- Experience with NoSQL databases (e.g., MongoDB, Redis) where applicable to the system design.
- Strong understanding of API design principles, security (OAuth2/JWT), and error handling strategies.
- Hands-on experience with cloud infrastructure (AWS, GCP, or Azure) and managing production environments.
- Proficient in containerization (Docker) and deployment automation using CI/CD pipelines.
- Experience with background processing, message queues, or event-driven systems.
- Familiarity with monitoring, logging, and alerting tools to ensure system health and reliability.
- Understanding of infrastructure management practices: basic scripting, access control, and environment setup.
- Understanding of how different frontend/mobile components work, and willingness to explore and work in them if required.
- Ability to independently take features from concept to deployment with a focus on reliability and scalability.
- Experience mentoring developers and contributing to high-level technical decisions.

This job was posted by Krishna Sharmathi from RootQuotient.
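The OAuth2/JWT requirement above boils down to understanding token signing and verification. A minimal sketch of the HMAC step behind JWT-style tokens (HS256), using only the standard library; a production system would use a vetted library such as PyJWT and also validate claims like expiry:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    """URL-safe base64 without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_token(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"},
                               separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_token(token: str, secret: bytes) -> bool:
    signing_input, _, sig = token.encode().rpartition(b".")
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sig, expected)

tok = sign_token({"sub": "user-42"}, b"server-secret")
print(verify_token(tok, b"server-secret"))  # True
print(verify_token(tok, b"wrong-secret"))   # False
```

The same shape underlies most bearer-token auth: sign on issue, verify on every request, and reject anything whose signature does not match.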
Posted 4 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for a passionate and skilled Full Stack Developer with strong experience in React.js, Node.js, and AWS Lambda to build a custom enterprise platform that interfaces with a suite of SDLC tools. This platform will streamline tool administration, automate provisioning and deprovisioning of access, manage licenses, and offer centralized dashboards for governance and monitoring.

Required Skills & Qualifications:
- 4–6 years of hands-on experience as a Full Stack Developer
- Proficient in React.js and component-based front-end architecture
- Strong backend experience with Node.js and RESTful API development
- Solid experience with AWS Lambda, API Gateway, DynamoDB, S3, etc.
- Prior experience integrating and automating workflows for SDLC tools like JIRA, Jenkins, GitLab, Bitbucket, GitHub, SonarQube, etc.
- Understanding of OAuth2, SSO, and API key-based authentication
- Familiarity with CI/CD pipelines, microservices, and event-driven architectures
- Strong knowledge of Git and modern development practices
- Good problem-solving skills and ability to work independently

Nice to Have:
- Experience with Infrastructure-as-Code (e.g., Terraform, CloudFormation)
- Experience with AWS EventBridge, Step Functions, or other serverless orchestration tools
- Knowledge of enterprise-grade authentication (LDAP, SAML, Okta)
- Familiarity with monitoring/logging tools like CloudWatch, ELK, or DataDog

Key Responsibilities:
- Design and develop intuitive front-end interfaces using React.js, ensuring seamless user experiences.
- Build robust backend services using Node.js and AWS Lambda, with integrations to external APIs (e.g., JIRA, Jenkins, GitLab, GitHub, SonarQube).
- Create secure, scalable REST APIs and event-driven services for tool license management and user access automation.
- Develop and integrate custom workflows for:
  - License allocation & de-allocation
  - Vendor resource onboarding
  - Admin task automation (e.g., account creation, project config)
- Implement custom dashboards and reporting interfaces for usage, access, and compliance metrics.
- Collaborate with DevOps and Security teams to enforce secure API and cloud deployment practices.
- Write clean, maintainable code and participate in code reviews and design discussions.
- Troubleshoot issues and deliver fixes in a fast-paced enterprise environment.
- Document workflows, APIs, and architectural decisions.
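A Lambda function for the license-allocation workflow described above follows the standard `handler(event, context)` calling convention. A hypothetical sketch (the event shape and in-memory pool are illustrative stand-ins; a real implementation would read and update seat counts in DynamoDB, and this posting targets Node.js, shown here in Python for brevity):

```python
# Hypothetical license-allocation handler in the AWS Lambda style.
# A simple in-memory dict stands in for a DynamoDB seat-count table.
LICENSE_POOL = {"sonarqube": 2, "jira": 0}  # available seats (illustrative)

def handler(event, context=None):
    """event: {"tool": "<tool name>"} -> API Gateway-style response."""
    tool = event.get("tool")
    if LICENSE_POOL.get(tool, 0) > 0:
        LICENSE_POOL[tool] -= 1  # a real handler would do a conditional update
        return {"statusCode": 200, "body": f"license granted for {tool}"}
    return {"statusCode": 409, "body": f"no licenses available for {tool}"}

print(handler({"tool": "sonarqube"})["statusCode"])  # 200
print(handler({"tool": "jira"})["statusCode"])       # 409
```

The 409 response models the "pool exhausted" case so the calling dashboard can surface it rather than silently failing.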
Posted 4 days ago
11.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Responsibilities
- Define and drive the overall architecture for scalable, secure, and high-performance distributed systems.
- Write and review code for critical modules and performance-sensitive components to set quality and architectural standards.
- Collaborate with engineering leads and product managers to align technology strategy with business goals.
- Evaluate and recommend tools, technologies, and processes to ensure the highest quality product platform.
- Own and evolve the system design, ensuring modularity, multi-tenancy, and future extensibility.
- Establish and govern best practices around service design, API development, security, observability, and performance.
- Review code, designs, and technical documentation, ensuring adherence to architecture and design principles.
- Lead design discussions and mentor senior and mid-level engineers to improve design thinking and engineering quality.
- Partner with DevOps to optimise CI/CD, containerization, and infrastructure-as-code.
- Stay abreast of industry trends and emerging technologies, assessing their relevance and value.

Requirements
- Strong understanding of data structures and algorithms, and a minimum of 11 years of experience.
- Good knowledge of low-level and high-level system designs and best practices.
- Strong expertise in Java & Spring Boot, with a deep understanding of microservice architectures and design patterns.
- Good knowledge of databases (both SQL and NoSQL), including schema design, sharding, and performance tuning.
- Expertise in Kubernetes, Helm, and container orchestration for deploying and managing scalable applications.
- Advanced knowledge of Kafka for stream processing, event-driven architecture, and data integration.
- Proficiency in Redis for caching, session management, and pub-sub use cases.
- Solid understanding of API design (REST/gRPC), authentication (OAuth2/JWT), and security best practices.
- Strong grasp of system design fundamentals: scalability, reliability, consistency, and observability.
- Experience with monitoring and logging frameworks (e.g., Datadog, Prometheus, Grafana, ELK, or equivalent).
- Excellent problem-solving, communication, and cross-functional leadership skills.
- Prior experience in leading architecture for SaaS or high-scale multi-tenant platforms is highly desirable.

This job was posted by Shivansh Prakash Srivastava (Talent Acq.) from GreyOrange.
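The Redis sharding and caching expertise this role asks for usually involves consistent hashing, which keeps key-to-node remapping small when the cluster resizes. A minimal illustrative sketch (node names and virtual-node count are made up; the posting's stack is Java, shown here in Python for compactness):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring: each key maps to the nearest node
    clockwise on the ring, so adding or removing a node remaps only a
    fraction of keys. Virtual nodes smooth out the distribution."""

    def __init__(self, nodes, vnodes=100):
        self._ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # Wrap around to the first ring position past the end.
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._keys)
        return self._ring[idx][1]

ring = ConsistentHashRing(["redis-1", "redis-2", "redis-3"])
print(ring.node_for("session:abc"))  # deterministic: same key, same node
```

Redis Cluster itself uses fixed hash slots rather than this ring form, but the goal is the same: stable key placement under cluster membership changes.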
Posted 4 days ago
2.0 - 4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
We are looking for a highly skilled and hands-on Senior Data Engineer to join our growing data engineering practice in Mumbai. This role requires deep technical expertise in building and managing enterprise-grade data pipelines, with a primary focus on Amazon Redshift, AWS Glue, and data orchestration using Airflow or Step Functions. You will be responsible for building scalable, high-performance data workflows that ingest and process multi-terabyte-scale data across complex, concurrent environments. The ideal candidate is someone who thrives in solving performance bottlenecks, has led or participated in data warehouse migrations (e.g., Snowflake to Redshift), and is confident in interfacing with business stakeholders to translate requirements into robust data solutions.

Responsibilities
- Design, develop, and maintain high-throughput ETL/ELT pipelines using AWS Glue (PySpark), orchestrated via Apache Airflow or AWS Step Functions.
- Own and optimize large-scale Amazon Redshift clusters and manage high-concurrency workloads for a very large user base.
- Lead and contribute to migration projects from Snowflake or traditional RDBMS to Redshift, ensuring minimal downtime and robust validation.
- Integrate and normalize data from heterogeneous sources, including REST APIs, AWS Aurora (MySQL/Postgres), streaming inputs, and flat files.
- Implement intelligent caching strategies and leverage EC2 and serverless compute (Lambda, Glue) for custom transformations and processing at scale.
- Write advanced SQL for analytics, data reconciliation, and validation, demonstrating strong SQL development and tuning experience.
- Implement comprehensive monitoring, alerting, and logging for all data pipelines to ensure reliability, availability, and cost optimization.
- Collaborate directly with product managers, analysts, and client-facing teams to gather requirements and deliver insights-ready datasets.
- Champion data governance, security, and lineage, ensuring data is auditable and well-documented across all environments.

Requirements
- 2-4 years of core data engineering experience, with a focus on hands-on Amazon Redshift performance tuning and large-scale cluster management.
- Demonstrated experience handling multi-terabyte Redshift clusters, concurrent query loads, and managing complex workload segmentation and queue priorities.
- Strong experience with AWS Glue (PySpark) for large-scale ETL jobs.
- Solid understanding and implementation experience of workflow orchestration using Apache Airflow or AWS Step Functions.
- Strong proficiency in Python, advanced SQL, and data modeling concepts.
- Familiarity with CI/CD pipelines, Git, DevOps processes, and infrastructure-as-code concepts.
- Experience with Amazon Athena, Lake Formation, or S3-based data lakes.
- Hands-on participation in Snowflake, BigQuery, or Teradata migration projects.
- AWS certifications such as AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Associate/Professional.
- Exposure to real-time streaming architectures or Lambda architectures.

Soft Skills & Expectations
- Excellent communication skills: able to confidently engage with both technical and non-technical stakeholders, including clients.
- Strong problem-solving mindset and a keen attention to performance, scalability, and reliability.
- Demonstrated ability to work independently, lead tasks, and take ownership of large-scale systems.
- Comfortable working in a fast-paced, dynamic, and client-facing environment.

This job was posted by Rituza Rani from Oneture Technologies.
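The data-reconciliation and migration-validation duties above typically compare row counts, column aggregates, and key coverage between the source and target warehouses. A hedged sketch with made-up column names, operating on plain Python rows so the comparison logic is visible (in practice both sides would be SQL aggregates run against Snowflake and Redshift):

```python
# Hypothetical post-migration reconciliation: compare row counts, a
# column sum, and key coverage between a source extract and a target
# extract, each represented as a list of dict rows.
def reconcile(source_rows, target_rows, key_col, sum_col):
    report = {
        "source_count": len(source_rows),
        "target_count": len(target_rows),
        "source_sum": sum(r[sum_col] for r in source_rows),
        "target_sum": sum(r[sum_col] for r in target_rows),
    }
    src_keys = {r[key_col] for r in source_rows}
    tgt_keys = {r[key_col] for r in target_rows}
    report["missing_in_target"] = sorted(src_keys - tgt_keys)
    report["ok"] = (report["source_count"] == report["target_count"]
                    and report["source_sum"] == report["target_sum"]
                    and not report["missing_in_target"])
    return report

src = [{"id": 1, "amt": 10}, {"id": 2, "amt": 5}]
tgt = [{"id": 1, "amt": 10}]  # row id=2 was lost during migration
r = reconcile(src, tgt, "id", "amt")
print(r["ok"], r["missing_in_target"])  # False [2]
```

Checking a sum alongside the count catches value corruption that a count alone would miss, which is why migration runbooks usually validate both.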
Posted 4 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Who We Are
Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products. All of which enables more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value – and that we do. Zinnia has over $180 billion in assets under administration, serves 100+ carrier clients, 2500 distributors and partners, and over 2 million policyholders.

Who You Are
We are looking for a Network Security Engineer with deep Zscaler expertise and a well-rounded background in cloud and enterprise networking. This role centers on the deployment, administration, and optimization of Zscaler Internet Access (ZIA) and Zscaler Private Access (ZPA), while also supporting broader network security initiatives, including Cisco Meraki infrastructure and AWS cloud networking. The ideal candidate is a hands-on engineer with strong security instincts, a problem-solving mindset, and experience managing both on-prem and cloud-based networking environments.

What You’ll Do
- Lead the design, deployment, and optimization of Zscaler ZIA and ZPA across the enterprise.
- Manage Zscaler policies, including SSL inspection, URL filtering, access control, and zero trust configuration.
- Serve as the subject matter expert for Zscaler, owning integrations, troubleshooting, and escalations.
- Configure and support Cisco Meraki networking hardware, including firewalls, switches, and wireless infrastructure.
- Architect and manage AWS networking, including VPC design, Transit Gateway, NACLs, Security Groups, and routing.
- Develop and implement network segmentation and secure access strategies using both cloud and on-prem tools.
- Create and maintain detailed network documentation, runbooks, and security standards.
- Automate and streamline network management tasks using IaC tools or scripting (e.g., Python, Terraform).
- Collaborate with security, cloud, and infrastructure teams to enforce zero trust architecture and data protection standards.
- Monitor, analyze, and respond to network threats, alerts, and performance issues.
- Evaluate and implement new tools or services to enhance network security and visibility.

What You’ll Need
- 5+ years of experience in network engineering or security roles, with at least 2+ years hands-on with Zscaler platforms (ZIA, ZPA, SIPA).
- Strong experience configuring and managing Cisco Meraki networking devices in a distributed enterprise environment.
- Solid understanding of AWS networking principles and services (e.g., VPCs, Route 53, Direct Connect).
- Proficiency in network security concepts, including VPN, DNS security, zero trust, and endpoint integration.
- Experience working with firewall policies, network segmentation, and cloud security architecture.
- Familiarity with network troubleshooting tools (e.g., Wireshark, packet capture, logging platforms).
- Scripting experience for automation and orchestration is a plus (e.g., Python, Bash, Terraform).
- Excellent communication and collaboration skills; ability to work cross-functionally.
- Capable of leading projects, documenting processes, and mentoring junior team members.

Certifications (Preferred But Not Required)
- Zscaler Certified Cloud Professional (ZCCP-IA / ZCCP-PA)
- Cisco Meraki CMNA or other Cisco certifications
- AWS Certified Advanced Networking – Specialty
- Certified Information Systems Security Professional (CISSP) or equivalent

WHAT’S IN IT FOR YOU?
We’re looking for the best and brightest innovators in the industry to join our team. At Zinnia, you collaborate with smart, creative professionals who are dedicated to delivering cutting-edge technologies, deeper data insights, and enhanced services to transform how insurance is done. Visit our website at www.zinnia.com for more information.
Apply by completing the online application on the careers section of our website. We are an Equal Opportunity employer committed to a diverse workforce. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.
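Network-segmentation scripting of the kind this role automates often starts with simple address-containment checks. An illustrative sketch using only the standard library's `ipaddress` module (the subnets are invented examples, not Zinnia's segmentation plan; real enforcement happens in Zscaler policies, NACLs, and security groups, with scripts like this used for auditing):

```python
import ipaddress

# Illustrative segmentation audit: check whether a host address falls
# inside one of the allowed network segments.
ALLOWED_SEGMENTS = [
    ipaddress.ip_network(n) for n in ("10.10.0.0/16", "192.168.8.0/24")
]

def host_allowed(host: str) -> bool:
    addr = ipaddress.ip_address(host)
    # `addr in net` tests subnet membership for IPv4/IPv6 alike.
    return any(addr in net for net in ALLOWED_SEGMENTS)

print(host_allowed("10.10.4.7"))   # True  (inside 10.10.0.0/16)
print(host_allowed("172.16.0.1"))  # False (no matching segment)
```

Scaled up, the same membership test can sweep firewall logs or flow records to flag traffic that crosses segment boundaries the policy does not permit.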
Posted 4 days ago