7.0 years
0 Lacs
Delhi
Remote
Join Tether and Shape the Future of Digital Finance

At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.

Innovate with Tether

Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services. But that’s just the beginning:

Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.

Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing.

Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.

Tether Evolution: At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.

Why Join Us?

Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards.
We’ve grown fast, stayed lean, and secured our place as a leader in the industry. If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you. Are you ready to be part of the future?

About the job

We are seeking a highly skilled Lead DevOps Engineer to:
- Lead and guide a team of DevOps specialists
- Architect, implement, and help maintain CI/CD pipelines using GitHub
- Deploy and manage critical infrastructure

The ideal candidate will need extensive experience with, among other areas, Docker, JavaScript package publishing to NPM, and automating mobile app build processes. Deep expertise in Linux system administration and networking will ensure scalable, secure, and highly available deployments.

Responsibilities
- Mentor and lead a team of DevOps specialists, promoting best practices, documentation, and knowledge sharing.
- Collaborate cross-functionally (Dev, QA, Management, etc.) to enhance deployment quality, observability, and stability.
- Implement monitoring, logging, and alerting to proactively detect issues and maintain system health.
- Design the architecture, implementation, and management of end-to-end CI/CD pipelines in GitHub Actions, ensuring rapid and reliable software delivery.
- Design and enforce test-driven deployment systems, integrating automated testing at every stage to maintain code quality and accelerate feedback loops.
- Oversee server system administration, including configuration, monitoring, patching, and troubleshooting.
- Keep up to date on industry trends and best practices, and evaluate and integrate new DevOps tools and processes.

Requirements
- 7+ years in DevOps/Infrastructure roles, with at least 2-3 in a leadership/technical lead capacity.
- Expertise in containerization technologies—Docker image creation, registry management, and basic orchestration patterns.
- Hands-on experience managing JavaScript packages and publishing workflows to NPM, with a solid understanding of semantic versioning.
- Understanding of C++ build systems, specifically CMake, and experience optimizing native code pipelines using GitHub Actions.
- Strong Linux system administration and networking expertise, including shell scripting, package management, system performance troubleshooting, firewalls, and VPNs to secure and optimize deployments.
- Excellent leadership, problem-solving, and communication skills.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related discipline.

Important information for candidates

Recruitment scams have become increasingly common. To protect yourself, please keep the following in mind when applying for roles:
- Apply only through our official channels. We do not use third-party platforms or agencies for recruitment unless clearly stated. All open roles are listed on our official careers page: https://tether.recruitee.com/
- Verify the recruiter’s identity. All our recruiters have verified LinkedIn profiles. If you’re unsure, you can confirm their identity by checking their profile or contacting us through our website.
- Be cautious of unusual communication methods. We do not conduct interviews over WhatsApp, Telegram, or SMS. All communication is done through official company emails and platforms.
- Double-check email addresses. All communication from us will come from emails ending in @tether.to or @tether.io.
- We will never request payment or financial details. If someone asks for personal financial information or payment at any point during the hiring process, it is a scam. Please report it immediately.

When in doubt, feel free to reach out through our official website.
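An aside on the semantic-versioning requirement in the posting above: versions compare component by component as integers, not as plain strings ("1.10.0" is newer than "1.9.3" even though it sorts earlier lexically). A minimal illustrative sketch, not part of the posting, with pre-release and build metadata ignored for brevity:

```python
def parse_semver(v: str) -> tuple:
    """Split 'MAJOR.MINOR.PATCH' into an integer tuple so that tuple
    comparison matches semantic-version ordering."""
    major, minor, patch = v.split(".")
    return (int(major), int(minor), int(patch))

def needs_publish(local: str, registry: str) -> bool:
    """A package should only be published when the local version is newer
    than the version already on the registry."""
    return parse_semver(local) > parse_semver(registry)

print(needs_publish("1.10.0", "1.9.3"))  # prints True: 10 > 9 numerically
```

A string comparison would get this wrong ("1.10.0" < "1.9.3" lexically), which is exactly why publishing workflows parse versions before comparing them.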
Posted 3 days ago
2.0 years
2 - 8 Lacs
Mohali
On-site
We are seeking a DevOps Engineer with strong experience in CI/CD pipelines, cloud infrastructure, automation, and networking. The ideal candidate will ensure seamless deployment, high system reliability, and secure networking practices.

Key Responsibilities:
- Design, build, and maintain CI/CD pipelines (e.g., Jenkins, GitLab CI)
- Automate infrastructure provisioning using tools like Terraform, Ansible, etc.
- Manage and optimize cloud infrastructure (AWS, Azure, GCP)
- Implement and manage containerized applications using Docker and Kubernetes
- Monitor system performance, availability, and security
- Configure and manage internal networks, VPNs, firewalls, and load balancers
- Troubleshoot networking issues and ensure minimal downtime
- Maintain network documentation and ensure adherence to security standards
- Collaborate with developers and QA to support smooth deployments and scalability
- Implement system monitoring, alerting, and logging (e.g., Prometheus, Grafana, ELK stack)

Required Skills and Qualifications:
- 2–5 years of experience as a DevOps Engineer or similar role
- Hands-on experience with cloud platforms and infrastructure-as-code tools
- Strong scripting skills (Bash, Shell, Python, etc.)
- Solid understanding of computer networking (TCP/IP, DNS, VPN, firewalls)
- Experience with containerization and orchestration (Docker, Kubernetes)
- Familiarity with Linux/Unix-based systems
- Good understanding of network protocols and troubleshooting tools

Preferred Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or related field
- Certifications in AWS/Azure/GCP or networking (CCNA, etc.) are a plus

Job Type: Full-time
Pay: ₹17,776.87 - ₹69,135.46 per month
Work Location: In person
Speak with the employer: +91 9872235857
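As a toy illustration of the networking-troubleshooting skills listed above (illustrative only, not part of the posting): many such scripts start with a DNS-plus-TCP reachability probe before digging into firewalls or routing.

```python
# Minimal reachability check: resolve the host and attempt a TCP connect.
# Both DNS failures (socket.gaierror) and connection failures (e.g.
# ConnectionRefusedError) are subclasses of OSError, so one except suffices.
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In practice this distinguishes "the service is down" from "DNS is broken" only once you split the resolve and connect steps; the sketch collapses them for brevity.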
Posted 3 days ago
0 years
5 Lacs
Mohali
Remote
Dipoletechi is hiring candidates for the profile of IT Support Executive.

Experience: 1-4 years
Salary: As per company norms
Location: Mohali, Phase 8b

The Service Desk’s goals include:
- Providing a single point of contact for end-user issues
- Facilitating the restoration of normal service operation while minimizing impact to the end-user
- Delivering services within agreed-upon SLAs

The Service Desk’s duties include but are not limited to:
- Provide remote and onsite desktop, laptop, server, and network problem management and resolution services to clients and end-users via the Company’s communications and remote/on-site support solutions, processes, and procedures
- Identify, document, prioritize, troubleshoot, and escalate service requests per the Company’s problem management and resolution processes and SLAs
- Perform proactive maintenance of client and end-user hardware, software, and services per the Company’s established processes and best practices
- Perform routine server maintenance and health checks in line with documented maintenance schedules
- Check and remediate failed backup jobs and escalate to appropriate resources when necessary
- Monitor and respond to RMM alerts according to company priority and escalation protocols
- Coordinate with vendors for support, repairs, RMAs, or escalations as necessary for timely service delivery
- Maintain and pursue I.T. training competencies and certifications per the Company’s established training schedule and requirements
- Maintain Company standards for client satisfaction, utilization, and compliance policies
- Utilize the Company’s PSA and RMM solutions per established processes to deliver maintenance and problem management and resolution services to clients and end-users
- Interface with clients, end-users, and vendor support resources as needed to deliver services within established SLAs
- Maintain communication with all affected parties during problem management and resolution per the Company’s established processes and procedures

Competencies Required:
- PC/laptop issues (IE, Windows), workstation software installs
- Resolve PC Internet connectivity issues
- Peripheral device connectivity
- Smartphone email integration
- Virus removal and cleanup
- VPN connectivity, remote worker connectivity
- Email client connectivity support
- MS Office suite support
- Follow all scripts/procedures
- Restart services, verify log files, back up incident logging
- Deploy monitoring agents
- Remote troubleshooting
- Light dispatching
- Interfacing with vendors and manufacturers’ service support
- Basic server administration and maintenance
- Backup monitoring and basic remediation steps
- Alert interpretation and ticket generation from RMM systems
- Network monitoring
- Exceptional customer service and communication skills
- Assist project managers, engineers, and staff as needed
- Ability to acquire the following certification: MCTS (Windows 10)

Day-to-Day Service Delivery

The Service Desk Engineer’s daily duties are determined by their Service Desk Manager, whose responsibilities include managing the N-Central Monitoring Solution and the Service Desk, and ensuring proper prioritization and assignment of all Service Requests. Depending on staffing and client load, some engineers may be dedicated to N-Central monitoring and alert response. The scheduling of remote and onsite work is coordinated by the Service Manager or Dispatcher.
The Service Manager is ultimately responsible for ensuring SLAs are maintained. A typical day includes:
- Logging in to the CRM and RMM systems
- Reviewing newly assigned and open Service Requests
- Monitoring RMM alerts and addressing or escalating as appropriate
- Reviewing backup reports and remediating failed jobs or escalating as needed
- Performing server maintenance tasks and logging actions accordingly
- Working tickets in order of priority and within SLA requirements
- Contacting clients/end-users to collect issue details and begin resolution
- Documenting issue resolution steps and verifying user satisfaction
- Escalating issues that fall outside Tier I or SLA thresholds
- Following up on completed Service Requests within 24 hours to ensure resolution and customer satisfaction

Interested candidates can share their resume at hr(at)dipoletechi.com
For more details call: 9517770049
References are highly appreciated.

Job Type: Full-time
Pay: Up to ₹500,000.00 per year
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Required)
Language: English (Required)
Location: Mohali, Punjab (Required)
Shift availability: Night Shift (Required)
Work Location: In person
Posted 3 days ago
3.0 years
0 Lacs
India
Remote
Thinkgrid Labs is at the forefront of innovation in custom software development. Our expert team of software engineers, architects, and UI/UX designers specialises in crafting bespoke web, mobile, and cloud applications, along with AI solutions and intelligent bots. Serving a diverse range of industries, we have a global client base across five continents. Our commitment to quality and passion for technological advancement drive us to push boundaries and set new standards. We're expanding our team with smart and creative individuals who are passionate about building high-performance, user-friendly, flexible, and maintainable software.

We are hiring a Health Information Exchange (HIE) Software Engineer to work on projects for clients outside of India, so excellent oral and written communication skills are a must.

Job Title: Health Information Exchange (HIE) Software Engineer
Location: Remote
Working Hours: 3 PM IST to 12 AM IST
Experience Required: Minimum 3 years
Education: Bachelor’s or Master’s degree in Computer Science or Health Informatics

Who you are:
- HIE Standards Specialist: Deep, practical knowledge of IHE profiles and ITI transactions—PIX/PDQ, XDS.b, XCA, XCDR/XCT, XCPD, XDW—and familiarity with HL7 v2/v3, CDA, and FHIR.
- Integration Engineer: Proven experience building and securing SOAP and RESTful services, handling message transformation (Mirth Connect, Iguana, Apache Camel, or similar), and integrating with EMR/EHR systems.
- Master Patient Index (MPI) Pro: Hands-on experience implementing or integrating enterprise/clinical MPIs, probabilistic or deterministic matching algorithms, and patient de-duplication strategies.
- Cloud-Native Developer: Proficient in one or more modern stacks—Java/Spring Boot, .NET Core, Node.js/TypeScript, or Python/FastAPI—with microservices architecture, containerisation (Docker, Kubernetes), and deployments on AWS / Azure / GCP.
- Security & Compliance Aficionado: Working knowledge of HIPAA, CMS, ONC Certification criteria, TEFCA, OAuth 2.0/OIDC, and TLS/mTLS for secure data exchange.
- Quality Champion: Comfortable with IHE Gazelle, NIST XDS tools, Touchstone, or similar test harnesses to validate conformance and performance.
- Problem Solver & Team Player: Thrive in an agile, distributed, cross-functional environment; able to communicate complex technical ideas clearly to non-technical stakeholders.
- Passionate & Humble: Enthusiastic about improving healthcare data exchange and willing to learn continuously while empowering teammates.

What you will be doing:
- Design & Architecture: Define HIE solution architectures, data models, and APIs that implement IHE ITI profiles (PIX/PDQ, XDS.b, XCA, XCPD, XCDR, etc.)—including security, scalability, and high availability considerations.
- Development & Integration: Build and maintain services, adapters, and orchestration workflows to ingest, store, query, and retrieve clinical documents and images across disparate systems. Implement enterprise or federated MPI services with robust patient-matching logic and reconciliation workflows.
- Standards Conformance & Validation: Configure and execute automated test suites using Gazelle EVS Client, NIST validators, Inferno, or custom Postman collections to ensure full IHE/HL7 compliance.
- Performance Optimisation & Monitoring: Profile message throughput, tune database indexes (SQL/NoSQL), and fine-tune document repository/registry performance; set up dashboards (Prometheus/Grafana, CloudWatch, or Azure Monitor).
- DevOps & CI/CD: Automate build, test, and deployment pipelines (GitHub Actions, Azure DevOps, Jenkins, or GitLab CI) and manage infrastructure as code (Terraform, CloudFormation).
- Security & Compliance: Enforce role-based access controls, audit logging, encryption in transit/at rest, and risk mitigation strategies aligned with HIPAA and ISO 27001 standards.
- Documentation & Knowledge Sharing: Produce technical design docs, sequence diagrams, data-flow diagrams, and API specs; guide junior engineers and collaborate closely with QA, analysts, and customer teams.
- Continuous Improvement: Stay current with evolving IHE profiles (e.g., Mobile Health Document Sharing), FHIR-based exchange initiatives, and industry best practices; recommend enhancements to keep our HIE offerings cutting-edge.

Benefits
- 5-day work week (except for rare emergencies)
- 100% remote setup with flexible work culture and international exposure
- Opportunity to work on mission-critical healthcare projects impacting providers and patients globally
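On the MPI patient-matching point above: before probabilistic scoring, an MPI typically applies a deterministic rule that compares normalized demographic fields exactly. A toy sketch (illustrative only, not the employer's implementation; the field names are assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Patient:
    given: str
    family: str
    birth_date: str  # ISO 8601 date, e.g. "1980-02-10"

def match_key(p: Patient) -> tuple:
    """Normalize the fields a deterministic rule compares exactly:
    case-fold and strip names, keep the birth date as-is."""
    return (p.given.strip().lower(), p.family.strip().lower(), p.birth_date)

def deduplicate(patients: list) -> list:
    """Keep the first record seen for each deterministic match key."""
    seen, unique = set(), []
    for p in patients:
        key = match_key(p)
        if key not in seen:
            seen.add(key)
            unique.append(p)
    return unique
```

Real MPIs go well beyond this (phonetic encodings, address history, probabilistic field weights), but the normalize-then-compare shape is the same.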
Posted 3 days ago
0 years
0 Lacs
India
Remote
About Company

Our client is a trusted global innovator of IT and business services. We help clients transform through consulting, industry solutions, business process services, digital & IT modernization, and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients’ long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe.

Job Title: Python Developer with Azure & AKS
Location: Noida / Remote
Experience: 7+ yrs
Job Type: Contract to hire
Notice Period: Immediate joiner

Mandatory Skills
- Hands-on experience as a Python developer working with Azure & AKS.
- Hands-on experience with Azure Kubernetes Service (AKS) — deploying, managing, and troubleshooting applications on AKS.
- Strong knowledge of containerisation using Docker and orchestration using Kubernetes with Python.
- Familiarity with Azure services like Azure Blob Storage, Azure Functions, Azure Service Bus, Azure Key Vault, etc.
- Experience in implementing CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools.
- Knowledge of infrastructure as code (IaC) tools like Terraform, Bicep, or ARM templates.
- Familiarity with monitoring and logging tools in Azure — e.g., Application Insights, Log Analytics, and Azure Monitor.
- Understanding of cloud security, networking, and resource management best practices in a production Azure environment.
- Experience working in DevOps-enabled teams following Agile and iterative development.

Responsibilities
- Writing clean, high-quality, high-performance, maintainable code
- Develop and support software including applications, database integration, interfaces, and new functionality enhancements
- Coordinate cross-functionally to ensure the project meets business objectives and compliance standards
- Support testing and deployment of new products and features
- Participate in code reviews.
Qualifications Bachelor's degree in Computer Science (or related field)
Posted 3 days ago
12.0 years
0 Lacs
Noida
On-site
About Aeris: For more than three decades, Aeris has been a trusted cellular IoT leader enabling the biggest IoT programs and opportunities across Automotive, Utilities and Energy, Fleet Management and Logistics, Medical Devices, and Manufacturing. Our IoT technology expertise serves a global ecosystem of 7,000 enterprise customers, 30 mobile network operator partners, and 80 million IoT devices across the world. Aeris powers today’s connected smart world with innovative technologies and borderless connectivity that simplify management, enhance security, optimize performance, and drive growth. Built from the ground up for IoT and road-tested at scale, Aeris IoT Services are based on the broadest technology stack in the industry, spanning connectivity up to vertical solutions. As veterans of the industry, we know that implementing an IoT solution can be complex, and we pride ourselves on making it simpler.

Our company is in an enviable spot. We’re profitable, and both our bottom line and our global reach are growing rapidly. We’re playing in an exploding market where technology evolves daily and new IoT solutions and platforms are being created at a fast pace.

A few things to know about us:

We put our customers first. When making decisions, we always seek to do what is right for our customer first, our company second, our teams third, and individual selves last.

We do things differently. As a pioneer in a highly competitive industry that is poised to reshape every sector of the global economy, we cannot fall back on old models. Rather, we must chart our own path and strive to out-innovate, out-learn, out-maneuver and out-pace the competition on the way.

We walk the walk on diversity. We’re a brilliant and eclectic mix of ethnicities, religions, industry experiences, sexual orientations, generations and more – and that’s by design. We see diverse perspectives as a core competitive advantage.

Integrity is essential.
We believe in doing things well – and doing them right. Integrity is a core value here: you’ll see it embodied in our staff, our management approach and growing social impact work (we have a VP devoted to it). You’ll also see it embodied in the way we manage people and our HR issues: we expect employees and managers to deal with issues directly, immediately and with the utmost respect for each other and for the Company.

We are owners. Strong managers enable and empower their teams to figure out how to solve problems. You will be no exception, and will have the ownership, accountability and autonomy needed to be truly creative.

Job Title: Senior Oracle Database Administrator (DBA) – GCP
Location: Noida, India

We are seeking a highly skilled and experienced Senior Oracle DBA to manage and maintain our critical Oracle 12c, 18c, 19c, and 21c single-instance (with Data Guard) and RAC databases, hosted on Google Cloud Platform (GCP). The ideal candidate will possess deep expertise in Oracle database administration, including installation, configuration, patching, performance tuning, security, and backup/recovery strategies within a cloud environment. They will also have expertise and experience optimizing the underlying operating system and database parameters for maximum performance and stability.

Responsibilities:

Database Administration:
- Install, configure, and maintain Oracle 12c, 18c, 19c, and 21c single-instance (with Data Guard) and RAC databases on GCP Compute Engine.
- Implement and manage Oracle Data Guard for high availability and disaster recovery, including switchovers, failovers, and broker configuration.
- Perform database upgrades, patching, and migrations.
- Develop and implement backup and recovery strategies, including RMAN configuration and testing.
- Monitor database performance and proactively identify and resolve performance bottlenecks.
- Troubleshoot database issues and provide timely resolution.
- Implement and maintain database security measures, including user access control, auditing, and encryption.
- Automate routine database tasks using scripting languages (e.g., Shell, Python, PL/SQL).
- Create and maintain database documentation.

Database Parameter Tuning:
- In-depth knowledge of Oracle database initialization parameters and their impact on performance, with a particular focus on memory management parameters.
- Expertise in tuning Oracle memory structures (SGA, PGA) for optimal performance in a GCP environment. This includes:
  - Precisely sizing the SGA components (Buffer Cache, Shared Pool, Large Pool, Java Pool, Streams Pool) based on workload characteristics and available GCP Compute Engine memory resources.
  - Optimizing PGA allocation (PGA_AGGREGATE_TARGET, PGA_AGGREGATE_LIMIT) to prevent excessive swapping and ensure efficient SQL execution.
  - Understanding the interaction between SGA and PGA memory regions and how they are affected by GCP instance memory limits.
  - Tuning the RESULT_CACHE parameters for optimal query performance, considering the available memory and workload patterns.
- Proficiency in using Automatic Memory Management (AMM) and Automatic Shared Memory Management (ASMM) features, and knowing when manual tuning is required for optimal results.
- Knowledge of how GCP instance memory limits can impact Oracle's memory management and the appropriate adjustments to make.
- Experience with analysing AWR reports and identifying areas for database parameter optimization, with a strong emphasis on identifying memory-related bottlenecks (e.g., high buffer busy waits, excessive direct path reads/writes).
- Proficiency in tuning SQL queries using tools like SQL Developer and Explain Plan, particularly identifying queries that consume excessive memory or perform inefficient memory access patterns.
- Knowledge of Oracle performance tuning methodologies and best practices, specifically as they apply to memory management in a cloud environment.
- Experience with database indexing strategies and index optimization, understanding the impact of indexes on memory utilization.
- Solid understanding of Oracle partitioning and its benefits for large databases, including how partitioning can affect memory usage and query performance.
- Ability to perform proactive performance tuning based on workload analysis and trending, with a focus on memory usage patterns and potential memory-related performance issues.
- Expertise in diagnosing and resolving memory leaks or excessive memory consumption issues within the Oracle database.
- Deep understanding of how shared memory segments are managed within the Linux OS on GCP Compute Engine and how to optimize them for Oracle.

Data Guard Expertise:
- Deep understanding of Oracle Data Guard architectures (Maximum Performance, Maximum Availability, Maximum Protection).
- Expertise in configuring and managing Data Guard broker for automated switchovers and failovers.
- Experience in troubleshooting Data Guard issues and ensuring data consistency.
- Knowledge of Data Guard best practices for performance and reliability.
- Proficiency in performing Data Guard role transitions (switchover, failover) with minimal downtime.
- Experience with Active Data Guard is a plus.

Operating System Tuning:
- Deep expertise in Linux operating systems (e.g., Oracle Linux, Red Hat, CentOS) and their interaction with Oracle databases.
- Performance tuning of the Linux operating system for optimal Oracle database performance, including:
  - Kernel parameter tuning (e.g., shared memory settings, semaphores, file descriptor limits).
  - Memory management optimization (e.g., HugePages configuration).
  - I/O subsystem tuning (e.g., disk scheduler selection, filesystem optimization).
  - Network configuration optimization (e.g., TCP/IP parameters).
- Monitoring and analysis of OS performance metrics using tools like vmstat, iostat, top, and sar.
- Identifying and resolving OS-level resource contention issues (CPU, memory, I/O).
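One concrete corner of the HugePages point above: the Linux vm.nr_hugepages setting must provide enough huge pages to cover the entire SGA, rounded up to whole pages. A back-of-the-envelope calculator (an illustrative sketch only, not an official Oracle tool; assumes the common 2 MiB huge-page size):

```python
def estimate_nr_hugepages(sga_bytes: int, hugepage_bytes: int = 2 * 1024**2) -> int:
    """Round the SGA size up to whole huge pages (default 2 MiB pages).
    -(-a // b) is ceiling division using only integer arithmetic."""
    return -(-sga_bytes // hugepage_bytes)

# A 10 GiB SGA on 2 MiB pages needs 10 * 1024 / 2 = 5120 huge pages.
print(estimate_nr_hugepages(10 * 1024**3))  # prints 5120
```

In practice the value is then set via vm.nr_hugepages in sysctl, and a real sizing pass would also account for multiple instances sharing the host and verify the actual huge-page size from /proc/meminfo.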
Good to Have:

GCP Environment Management:
- Provision and manage GCP Compute Engine instances for Oracle databases, including selecting appropriate instance types and storage configurations.
- Configure and manage GCP networking components (VPCs, subnets, firewalls) for secure database access.
- Utilize GCP Cloud Monitoring and Logging for database monitoring and troubleshooting.
- Implement and manage GCP Cloud Storage for database backups.
- Experience with Infrastructure as Code (IaC) tools like Terraform or Cloud Deployment Manager to automate GCP resource provisioning.
- Cost optimization of Oracle database infrastructure on GCP.

Other Products and Platforms:
- Experience with other cloud platforms (AWS, Azure).
- Experience with NoSQL databases.
- Experience with Agile development methodologies.
- Experience with DevOps practices and tools (e.g., Ansible, Chef, Puppet).
- Experience with GoldenGate.

Qualifications:
- Bachelor's degree in Computer Science or a related field.
- 12+ years of experience as an Oracle DBA.
- Proven experience managing Oracle 12c, 18c, 19c, and 21c single-instance (with Data Guard) and RAC databases in a production environment, with strong Data Guard expertise.
- Extensive experience with Oracle database performance tuning, including OS-level and database parameter optimization.
- Hands-on experience with Oracle databases hosted on Google Cloud Platform (GCP).
- Strong understanding of Linux operating systems.
- Excellent troubleshooting and problem-solving skills.
- Strong communication and collaboration skills.
- Oracle Certified Professional (OCP) certification is highly preferred.
- GCP certifications (e.g., Cloud Architect, Cloud Engineer) are a plus.

Aeris may conduct background checks to verify the information provided in your application and assess your suitability for the role. The scope and type of checks will comply with the applicable laws and regulations of the country where the position is based.
Additional detail will be provided via the formal application process.

Aeris walks the walk on diversity. We’re a brilliant mix of varying ethnicities, religions, cultures, sexual orientations, gender identities, ages and professional/personal/military experiences – and that’s by design. Diverse perspectives are essential to our culture, innovative process and competitive edge. Aeris is proud to be an equal opportunity employer.
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
Noida
On-site
Job Information
Date Opened: 29/07/2025
Job Type: Full time
Industry: Technology
Work Experience: 5-7 years
City: Noida
Province: Uttar Pradesh
Country: India
Postal Code: 201303

Job Description

Key Responsibilities:
- Design, implement, and manage scalable, secure, and reliable cloud infrastructure on Azure
- Perform regular system monitoring, verify the integrity and availability of all cloud-based resources, and troubleshoot issues as needed
- Automate and streamline operations and processes using DevOps tools and methodologies, including Jenkins
- Collaborate with development teams to ensure seamless integration and continuous delivery
- Manage and optimize performance, utilization, and costs in the Azure cloud environment
- Conduct root cause analysis for incidents, and identify and implement corrective actions to prevent future occurrences
- Ensure compliance with security policies, standards, and best practices

Requirements

Required Skills and Qualifications:
- Extensive experience with Azure cloud services, including compute, storage, networking, and security
- Proficiency in scripting and automation using tools like PowerShell, shell scripts, cron, Azure CLI, ARM templates, Terraform, and Ansible
- Strong understanding of DevOps practices, including CI/CD pipelines with Jenkins, version control (e.g., Git), and configuration management
- Experience with monitoring and logging tools such as Graylog, Nagios, and Azure Monitor
- Excellent troubleshooting skills with a systematic approach to problem-solving
- Hands-on experience with Linux systems
- Familiarity with network analysis tools like Wireshark
- Knowledge of security tools such as Vault
- Python proficiency is a plus
- Ability to work in a fast-paced, collaborative environment and manage multiple priorities
- Strong communication and interpersonal skills

Educational Background:
- Bachelor’s degree in Computer Science, Information Technology, or a related field

Preferred Qualifications:
- Azure certifications (e.g., Azure Administrator, Azure DevOps Engineer)
- Experience with containerization technologies such as Docker and Kubernetes
- Knowledge of other cloud platforms like AWS or Google Cloud is a plus

Shift Details:
- Open to rotational shifts with 12x7 support
- 5 working days a week, with 9-hour shifts
Posted 3 days ago
6.0 years
0 Lacs
India
Remote
Requirement: Lead Software Developer (mandatory skills: React.js, Node.js, and any database)

Experience: 6 to 15 yrs
Job Location: WFH
Position: Permanent
Working Shifts: Afternoon Shift (1:00 pm IST to 10:00 pm IST)
Interview Process: 2 technical rounds, conducted via Google Meet video call

Required Skills & Experience:
- 6 years of experience in React.js (must)
- 6 years of experience in Node.js (must)
- 6 years of experience with any database (must)
- Ability to communicate effectively verbally and in writing in English
- Experience working in a product team and collaborating across various cross-functional teams
- Strong understanding of object-oriented programming concepts, designs, and best practices
- Hands-on experience in full stack development, developing consumer-facing software applications
- Experience designing and developing with rich JavaScript frameworks such as React.js and Node.js, as well as .NET, .NET Core, C#, ASP.NET, and .NET Entity Framework
- Experience working with SQL, Mongo, NoSQL, or any RDBMS database system
- Experience working with HTML, CSS, and TypeScript
- Experience building APIs and SDKs and managing an API lifecycle (key management, logging & auditing, security, etc.)
- Tenacious troubleshooting skills
- Strong analytical, conceptual, innovative, optimization, and problem-solving abilities
- Lead and mentor a team of developers, fostering a collaborative environment, conducting code reviews, and facilitating continuous technical improvement
- Guide the design, development, and implementation of scalable software solutions while ensuring adherence to best coding practices and architecture patterns
- Collaborate with cross-functional teams to align on project goals, timelines, and deliverables while ensuring timely execution and quality delivery

Benefits & Perks:
- 100% remote work
- Monday - Friday work week
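On the API-lifecycle point above (key management, logging & auditing): a common key-management pattern is to store only a hash of each issued key, so a leaked database does not leak usable credentials. An illustrative sketch under assumed names (not part of the posting):

```python
import hashlib
import secrets

def issue_key(store: dict) -> str:
    """Generate a random API key, record only its SHA-256 digest in the
    store, and return the plaintext key exactly once to the caller."""
    key = secrets.token_urlsafe(32)
    store[hashlib.sha256(key.encode()).hexdigest()] = True
    return key

def is_valid(store: dict, key: str) -> bool:
    """Validate a presented key by hashing it and looking up the digest."""
    return hashlib.sha256(key.encode()).hexdigest() in store
```

A production system would attach metadata (owner, scopes, expiry, revocation flag) to each digest and write an audit log entry per validation, but the hash-at-rest idea is the core of it.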
Posted 3 days ago
6.0 years
0 Lacs
Noida
On-site
Aristocrat operates a highly skilled team of DevOps engineers who regularly support various teams with CI/CD processes and infrastructure setup. They collaborate closely with development and studio teams to create and maintain CI/CD pipelines, automate manual support tasks, and build and manage the environments needed for application deployment and operations. As a DevOps team member, you will work closely with the development teams to produce CI/CD pipelines, help with code deployments, and optimize the flow of software from development to production. What you will do: Taking care of the GCP, AWS, and Azure cloud infrastructure (provisioning/alerting/monitoring, etc.). Experience in creating private networks and establishing networking; must have handled firewalls and VPN tunnels. Design and document processes for versioning, deployment, and the migration of code between environments. Excellent knowledge of Docker and Kubernetes. Excellent knowledge of Terraform and Ansible. Good knowledge of a scripting language: Python/Shell/Bash. Good knowledge of CI/CD, including GitOps. Good knowledge of Jenkins Pipelines/Groovy and Azure Pipelines. Strong experience with Linux OS. Experience working on production 24x7 support [L2/L3]. Experience with server, storage, and network operations. Knowledge of monitoring tools like Grafana, Prometheus, and Datadog. Knowledge of logging tools such as Coralogix, ELK, and Splunk. Intermediate experience with VMware. Experience with JIRA/Confluence or other defect tracking/wiki systems. Good experience with Istio or a service mesh [good to have]. Ability to work with a geographically dispersed team. Able to grasp functional aspects well (quickly and with minimal guidance). What We're Looking For: B.Tech. / B.E. / MCA in Computer Science with 6+ years of experience. Must have strong analytical and creative problem-solving skills. Able to challenge the status quo and constantly suggest improvements.
Demonstrates an extremely high level of accuracy and attention to detail. Must have strong communication skills and be able to work in a team. Ability to drive discussions toward conclusions. Articulate, and able to express ideas and issues without inhibition. Why Aristocrat? Aristocrat is a world leader in gaming content and technology, and a top-tier publisher of free-to-play mobile games. We deliver great performance for our B2B customers and bring joy to the lives of the millions of people who love to play our casino and mobile games. And while we focus on fun, we never forget our responsibilities. We strive to lead the way in responsible gameplay, and to lift the bar in company governance, employee wellbeing and sustainability. We’re a diverse business united by shared values and an inspiring mission to bring joy to life through the power of play. We aim to create an environment where individual differences are valued, and all employees have the opportunity to realize their potential. We welcome and encourage applications from all people regardless of age, gender, race, ethnicity, cultural background, disability status or LGBTQ+ identity. EEO M/F/D/V World Leader in Gaming Entertainment Robust benefits package Global career opportunities Our Values: All about the Player; Talent Unleashed; Collective Brilliance; Good Business Good Citizen Travel Expectations: None Additional Information: Depending on the nature of your role, you may be required to register with the Nevada Gaming Control Board (NGCB) and/or other gaming jurisdictions in which we operate. At this time, we are unable to sponsor work visas for this position. Candidates must be authorized to work in the job posting location for this position on a full-time basis without the need for current or future visa sponsorship.
Posted 3 days ago
5.0 years
0 Lacs
West Bengal
On-site
Job Information: Date Opened: 30/07/2025 | Job Type: Full time | Industry: IT Services | Work Experience: 5+ Years | City: Kolkata | Province: West Bengal | Country: India | Postal Code: 700091
About Us: We are a fast-growing technology company specializing in current and emerging internet, cloud, and mobile technologies.
Job Description: CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments, and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.
Key Responsibilities:
Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
Set up and administer SFTP servers on cloud-based VMs using chroot configurations, and automate file transfers to S3-backed Glacier.
Manage SNS for alerting and notification integration.
Ensure cost optimization of AWS services through billing reviews and usage audits.
Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
Compute Services: provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless, and event-driven architectures.
Storage & Content Delivery: manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
Networking & Connectivity: design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover; ensure proper functioning of network services such as TCP/IP and reverse proxies (e.g., NGINX).
Monitoring & Observability: implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
Database Services: deploy and manage relational databases via RDS for MySQL, PostgreSQL, and Aurora, as well as healthcare-specific FHIR database configurations.
Security & Compliance: enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
GitOps: apply excellent knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
Perform rollback and hotfix procedures with minimal downtime.
Collaborate with developers to define release and deployment processes.
Manage and standardize build environments across dev, staging, and production.
Manage release and deployment processes across dev, staging, and production.
Work cross-functionally with development and QA teams.
Lead incident postmortems and drive continuous improvement.
Perform root cause analysis and implement corrective/preventive actions for system incidents.
Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
Ensure on-time patching.
Mentor junior DevOps engineers.
Requirements
Required Qualifications:
Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
5+ years of proven DevOps engineering experience in cloud-based environments.
Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
Strong scripting and automation mindset.
Solid experience with Linux system administration and networking.
Excellent communication and documentation skills.
Ability to collaborate across teams and lead DevOps initiatives independently.
Preferred Qualifications:
Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
Experience with GitHub Actions is a plus.
Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
Experience working in regulated environments (e.g., healthcare or fintech).
Exposure to container security tools and cloud compliance scanners.
Experience: 5-10 Years | Working Mode: Hybrid | Job Type: Full-Time | Location: Kolkata
Benefits: Health insurance, Hybrid working mode, Provident Fund, Parental leave, Yearly Bonus, Gratuity
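The S3-to-Glacier archival responsibility in this posting is usually automated with a bucket lifecycle rule rather than ad-hoc copies. As a hedged sketch (the rule ID scheme, prefix, and day counts below are placeholder assumptions), this builds the rule document in the shape that boto3's `put_bucket_lifecycle_configuration` accepts; the same rule can equally be expressed as a Terraform `aws_s3_bucket_lifecycle_configuration` resource.

```python
# Sketch: build an S3 lifecycle rule that transitions objects to Glacier
# after N days and expires them later. Only the rule construction is
# shown; applying it needs AWS credentials and a real bucket.
def glacier_lifecycle_rule(prefix: str,
                           transition_days: int = 30,
                           expire_days: int = 365) -> dict:
    if expire_days <= transition_days:
        raise ValueError("objects must transition before they expire")
    return {
        "ID": f"archive-{prefix.strip('/') or 'all'}",  # placeholder naming scheme
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": transition_days, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": expire_days},
    }


rule = glacier_lifecycle_rule("sftp-uploads/", transition_days=30, expire_days=365)
# To apply (not run here):
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration={"Rules": [rule]})
```

Keeping the rule in code (or Terraform) rather than hand-editing it in the console is what makes the archival policy reviewable and versioned, in line with the GitOps expectations above.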
Posted 3 days ago
6.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Data Integration Specialist – Senior The opportunity We are seeking a talented and experienced Integration Specialist with 3–6 years of experience to join our growing Digital Integration team. The ideal candidate will play a pivotal role in designing, building, and deploying scalable and secure solutions that support business transformation, system integration, and automation initiatives across the enterprise. Your Key Responsibilities Work with clients to assess existing integration landscapes and recommend modernization strategies using MuleSoft. Translate business requirements into technical designs, reusable APIs, and integration patterns. Develop, deploy, and manage MuleSoft APIs and integrations on Anypoint Platform (CloudHub, Runtime Fabric, Hybrid). Collaborate with business and IT stakeholders to define integration standards, SLAs, and governance models. Implement error handling, logging, monitoring, and alerting using Anypoint Monitoring and third-party tools. Maintain integration artifacts and documentation, including RAML specifications, flow diagrams, and interface contracts. Ensure performance tuning, scalability, and security best practices are followed across integration solutions. Support CI/CD pipelines, version control, and DevOps processes for MuleSoft assets using platforms like Azure DevOps or GitLab. Collaborate with cross-functional teams (Salesforce, SAP, Data, Cloud, etc.) to deliver end-to-end connected solutions. Stay current with MuleSoft platform capabilities and industry integration trends to recommend improvements and innovations. 
Troubleshoot integration issues and perform root cause analysis in production and non-production environments. Contribute to internal knowledge-sharing, technical mentoring, and process optimization. Strong SQL, data integration, and data handling skills. Exposure to AI models and Python, and to using them in data cleaning/standardization. To qualify for the role, you must have 3–6 years of hands-on experience with MuleSoft Anypoint Platform and Anypoint Studio. Strong experience with API-led connectivity and reusable API design (System, Process, Experience layers). Proficient in DataWeave transformations, flow orchestration, and integration best practices. Experience with API lifecycle management including design, development, publishing, governance, and monitoring. Solid understanding of integration patterns (synchronous, asynchronous, event-driven, batch). Hands-on experience with security policies, OAuth, JWT, client ID enforcement, and TLS. Experience working with cloud platforms (Azure, AWS, or GCP) in the context of integration projects. Knowledge of performance tuning, capacity planning, and error handling in MuleSoft integrations. Experience in DevOps practices including CI/CD pipelines, Git branching strategies, and automated deployments. Experience with data intelligence cloud platforms like Snowflake, Azure, and Databricks. Ideally, you’ll also have MuleSoft Certified Developer or Integration Architect certification. Exposure to monitoring and logging tools (e.g., Splunk, Elastic, Anypoint Monitoring). Strong communication and interpersonal skills to work with technical and non-technical stakeholders. Ability to document integration requirements, user stories, and API contracts clearly and concisely. Experience in agile environments and comfort working across multiple concurrent projects. Ability to mentor junior developers and contribute to reusable component libraries and coding standards.
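Error handling for the integrations this posting describes typically means retrying transient failures with exponential backoff. MuleSoft expresses this natively with its until-successful scope, so the sketch below is only a language-neutral illustration of the policy, with function names and retry parameters as assumptions.

```python
# Sketch: retry a flaky integration call with exponential backoff.
# The delay schedule is computed up front so the policy is easy to test;
# a real Mule flow would use an until-successful scope instead.
import time


def backoff_delays(retries: int, base: float = 0.5, cap: float = 8.0) -> list[float]:
    """Exponential backoff schedule: base, 2*base, 4*base, ..., capped."""
    return [min(base * (2 ** i), cap) for i in range(retries)]


def call_with_retry(call, retries: int = 3, base: float = 0.5, sleep=time.sleep):
    """Attempt `call` up to retries+1 times, sleeping between failures."""
    for delay in backoff_delays(retries, base) + [None]:
        try:
            return call()
        except ConnectionError:  # retry only transient failures
            if delay is None:
                raise  # attempts exhausted; surface the error for alerting
            sleep(delay)
```

Injecting `sleep` as a parameter keeps the retry policy unit-testable without real waits, which matters when the same wrapper guards many downstream systems.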
What Working At EY Offers At EY, we’re dedicated to helping our clients, from start–ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Senior – Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 3 - 7 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture. In this role, you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Your technical responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. 
Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 3-7 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. 
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems. Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
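The similarity-search responsibility in this posting, retrieving the stored embeddings closest to a query, reduces to a nearest-neighbour ranking by cosine similarity. The sketch below is a dependency-free illustration with assumed function names; at scale this is exactly what a vector database such as Redis (mentioned above) replaces.

```python
# Sketch: top-k retrieval by cosine similarity over in-memory vectors.
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def top_k(query: list[float], corpus: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return the ids of the k corpus vectors most similar to the query, best first."""
    ranked = sorted(corpus, key=lambda cid: cosine(query, corpus[cid]), reverse=True)
    return ranked[:k]


docs = {
    "a": [1.0, 0.0],
    "b": [0.9, 0.1],
    "c": [0.0, 1.0],
}
print(top_k([1.0, 0.0], docs, k=2))  # → ['a', 'b']
```

The brute-force scan here is O(corpus size) per query; approximate indexes (e.g. HNSW, as used by vector databases) trade a little recall for sublinear lookup.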
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Senior – Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 3 - 7 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture. In this role, you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Your technical responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. 
Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 3-7 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. 
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems. Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Senior – Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 3 - 7 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture. In this role, you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Your technical responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. 
Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 3-7 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. 
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems. Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 days ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Data Integration Specialist – Senior The opportunity We are seeking a talented and experienced Integration Specialist with 3–6 years of experience to join our growing Digital Integration team. The ideal candidate will play a pivotal role in designing, building, and deploying scalable and secure solutions that support business transformation, system integration, and automation initiatives across the enterprise. Your Key Responsibilities Work with clients to assess existing integration landscapes and recommend modernization strategies using MuleSoft. Translate business requirements into technical designs, reusable APIs, and integration patterns. Develop, deploy, and manage MuleSoft APIs and integrations on Anypoint Platform (CloudHub, Runtime Fabric, Hybrid). Collaborate with business and IT stakeholders to define integration standards, SLAs, and governance models. Implement error handling, logging, monitoring, and alerting using Anypoint Monitoring and third-party tools. Maintain integration artifacts and documentation, including RAML specifications, flow diagrams, and interface contracts. Ensure performance tuning, scalability, and security best practices are followed across integration solutions. Support CI/CD pipelines, version control, and DevOps processes for MuleSoft assets using platforms like Azure DevOps or GitLab. Collaborate with cross-functional teams (Salesforce, SAP, Data, Cloud, etc.) to deliver end-to-end connected solutions. Stay current with MuleSoft platform capabilities and industry integration trends to recommend improvements and innovations. 
Troubleshoot integration issues and perform root cause analysis in production and non-production environments. Contribute to internal knowledge-sharing, technical mentoring, and process optimization. Strong SQL, data integration, and data-handling skills. Exposure to AI models and Python, and to using them in data cleaning/standardization. To qualify for the role, you must have 3–6 years of hands-on experience with MuleSoft Anypoint Platform and Anypoint Studio. Strong experience with API-led connectivity and reusable API design (System, Process, Experience layers). Proficient in DataWeave transformations, flow orchestration, and integration best practices. Experience with API lifecycle management including design, development, publishing, governance, and monitoring. Solid understanding of integration patterns (synchronous, asynchronous, event-driven, batch). Hands-on experience with security policies, OAuth, JWT, client ID enforcement, and TLS. Experience working with cloud platforms (Azure, AWS, or GCP) in the context of integration projects. Knowledge of performance tuning, capacity planning, and error handling in MuleSoft integrations. Experience in DevOps practices including CI/CD pipelines, Git branching strategies, and automated deployments. Experience with data intelligence cloud platforms like Snowflake, Azure, and Databricks. Ideally, you’ll also have a MuleSoft Certified Developer or Integration Architect certification. Exposure to monitoring and logging tools (e.g., Splunk, Elastic, Anypoint Monitoring). Strong communication and interpersonal skills to work with technical and non-technical stakeholders. Ability to document integration requirements, user stories, and API contracts clearly and concisely. Experience in agile environments and comfort working across multiple concurrent projects. Ability to mentor junior developers and contribute to reusable component libraries and coding standards.
What Working At EY Offers At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you.
Posted 3 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
📌 Position: Sourcing Manager – Real Estate 📍 Work Location: Khar, Mumbai. 📝 Position Summary – Sourcing Manager (Real Estate) As the Sourcing Manager – Real Estate, you will be instrumental in expanding and managing our channel partner ecosystem across Khar, Bandra, and Santacruz. With 2–3 years of real estate field experience, you’ll identify, source, and onboard channel partners, and proactively generate new leads from market areas and business/industrial parks. 🎓 Qualifications & Skills • Graduate in any stream • 2–3 years of proven experience in Real Estate • Excellent Communication Skills 🧰 Core Skills 1. Real Estate Market Analysis 2. Negotiation & Channel Partner Relations 3. Project Management & Sales Strategy 4. Communication & Lead Generation 5. Channel Sales & Network Building ✅ Responsibilities • Field experience in Real Estate, Telecom or FMCG is mandatory • Sourcing and onboarding Channel Partners to generate business • Identifying and engaging clients via industrial/business parks • Innovating and executing lead generation activities • Staying updated on product offerings and market positioning • Designing marketing & sales strategies to drive volumes • Preparing weekly/monthly MIS reports on site walk-ins & database entries • Scheduling client meetings to generate new business • Logging daily channel partner visits into system • Meeting or exceeding personal sourcing targets • Supporting the closing team on weekends (Sat & Sun) • Addressing queries from CPs and buyers to maintain strong relations • Leveraging networking to reach potential CPs regularly • Researching and identifying channel partners in primary market • Maintaining contact with partners in Bandra, Khar & Santacruz markets. ✉️ How to Apply CVs can be emailed to: vrishali.hripple@gmail.com
Posted 3 days ago
4.0 - 12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title: Google Cloud DevOps Engineer Location: PAN India The Opportunity: Publicis Sapient is looking for a Cloud & DevOps Engineer to join our team of bright thinkers and enablers. You will use your problem-solving skills, craft, and creativity to design and develop infrastructure interfaces for complex business applications, and contribute ideas for improvements in Cloud and DevOps practices, delivering innovation through automation. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions. Your Impact & Responsibilities: Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business. Lead and support the implementation of the engineering side of digital business transformations, with cloud, multi-cloud, security, observability, and DevOps as technology enablers. Build immutable infrastructure and maintain highly scalable, secure, and reliable cloud infrastructure that is optimized for performance and cost and compliant with security standards to prevent security breaches. Enable our customers to accelerate their software development lifecycle and reduce the time-to-market for their products or services.
Your Skills & Experience: 4 to 12 years of experience in Cloud & DevOps with a full-time Bachelor’s/Master’s degree (Science or Engineering preferred) Expertise in the DevOps & Cloud tools below: GCP (Compute, IAM, VPC, Storage, Serverless, Database, Kubernetes, Pub/Sub, Operations Suite) Configuration and monitoring of DNS, app servers, load balancers, and firewalls for high-volume traffic Extensive experience in designing, implementing, and maintaining infrastructure as code, preferably using Terraform or CloudFormation/ARM Templates/Deployment Manager/Pulumi Experience managing container infrastructure (on-prem and managed, e.g., AWS ECS, EKS, or GKE) Design, implement, and upgrade container infrastructure, e.g., K8s clusters and node pools Create and maintain deployment manifest files for microservices using Helm Utilize the Istio service mesh to create gateways, virtual services, traffic routing, and fault injection Troubleshoot and resolve container infrastructure and deployment issues Continuous Integration & Continuous Deployment Develop and maintain CI/CD pipelines for software delivery using Git and tools such as Jenkins, GitLab, CircleCI, Bamboo, and Travis CI Automate build, test, and deployment processes to ensure efficient release cycles and enforce software development best practices, e.g., quality gates and vulnerability scans Automate build and deployment processes using Groovy, Go, Python, Shell, or PowerShell Implement DevSecOps practices and tools to integrate security into the software development and deployment lifecycle. Manage artifact repositories such as Nexus and JFrog Artifactory for version control and release management.
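The build-and-deployment automation described above often includes a gate that waits for a newly deployed service to report healthy before traffic is promoted. A minimal, hedged sketch in Python (one of the scripting languages the posting names); the probe, attempt counts, and delays here are illustrative assumptions, not part of the posting:

```python
import time

def wait_until_healthy(probe, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Poll a health probe with exponential backoff.

    `probe` is any zero-argument callable returning a bool (for example,
    an HTTP check against a /healthz endpoint). Returns True as soon as
    the probe passes, or False if all attempts are exhausted.
    """
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            sleep(base_delay * (2 ** attempt))
    return False

# Simulated probe that succeeds on its third call (hypothetical).
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until_healthy(fake_probe, sleep=lambda s: None))  # True
```

Injecting `sleep` as a parameter keeps the backoff logic unit-testable without real delays; a pipeline step would call this before marking a release as promotable.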
Design, implement, and maintain observability, monitoring, logging, and alerting using the tools below: Observability: Jaeger, Kiali, CloudTrail, OpenTelemetry, Dynatrace Logging: Elastic Stack (Elasticsearch, Logstash, Kibana), Fluentd, Splunk Monitoring: Prometheus, Grafana, Datadog, New Relic Good to Have: Associate-level public cloud certifications Terraform Associate certification Benefits of Working Here: Gender-Neutral Policy 18 paid holidays throughout the year for NCR/BLR (22 for Mumbai) Generous parental leave and new parent transition program Flexible work arrangements Employee Assistance Programs to support your wellness and well-being Learn more about us at www.publicissapient.com or explore other career opportunities here
Posted 3 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Make an impact with NTT DATA Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive. Your day at NTT DATA The Security Managed Services Engineer (L2) is a developing engineering role, responsible for providing a managed service to clients to ensure that their Security Infrastructures and systems remain operational. Through the proactive monitoring, identifying, investigating, and resolving of technical incidents and problems, this role is able to restore service to clients. The primary objective of this role is to proactively review client requests or tickets and apply technical/process knowledge to resolve them without breaching service level agreement (SLA) and focuses on second-line support for incidents and requests with a medium level of complexity. The Security Managed Services Engineer (L2) may also contribute to / support on project work as and when required. What You'll Be Doing Key Responsibilities: Proactively monitors the work queues. Performs operational tasks to resolve all incidents/requests in a timely manner and within the agreed SLA. Updates tickets with resolution tasks performed. Identifies, investigates, analyses issues and errors prior to or when they occur, and logs all such incidents in a timely manner. Captures all required and relevant information for immediate resolution. Provides second level support to all incidents, requests and identifies the root cause of incidents and problems. Communicates with other teams and clients for extending support. Executes changes with clear identification of risks and mitigation plans to be captured into the change record. 
Follows the shift handover process, highlighting any key tickets to be focused on along with a handover of upcoming critical tasks to be carried out in the next shift. Escalates tickets to seek the right focus from CoE and other teams, and if needed continues the escalation to management. Works with automation teams for effort optimization and automating routine tasks. Ability to work across various other resolver groups (internal and external), such as service providers, TAC, etc. Identifies problems and errors before they impact a client’s service. Provides assistance to L1 Security Engineers for better initial triage and troubleshooting. Leads and manages all initial client escalations for operational issues. Contributes to the change management process by logging all change requests with complete details for standard and non-standard changes, including patching and any other changes to Configuration Items. Ensures all changes are carried out with proper change approvals. Plans and executes approved maintenance activities. Audits and analyses incident and request tickets for quality and recommends improvements with updates to knowledge articles. Produces trend analysis reports for identifying tasks for automation, leading to a reduction in tickets and optimization of effort. May also contribute to / support on project work as and when required. May work on implementing and delivering Disaster Recovery functions and tests. Performs any other related task as required. Knowledge and Attributes: Ability to communicate and work across different cultures and social groups. Ability to plan activities and projects well in advance, taking into account possible changing circumstances. Ability to maintain a positive outlook at work. Ability to work well in a pressurized environment. Ability to work hard and put in longer hours when necessary.
Ability to apply active listening techniques such as paraphrasing the message to confirm understanding, probing for further relevant information, and refraining from interrupting. Ability to adapt to changing circumstances. Ability to place clients at the forefront of all interactions, understanding their requirements, and creating a positive client experience throughout the total client journey. Academic Qualifications and Certifications: Bachelor's degree or equivalent qualification in IT/Computing (or demonstrated equivalent work experience). Certifications relevant to services supported. Certifications carry additional weightage on the candidate’s qualification for the role. A CCNA certification is a must; CCNP Security, PCNSE, or Check Point certification is good to have. Required Experience: Moderate level of relevant managed services experience handling Security Infrastructure. Moderate level of knowledge of ticketing tools, preferably ServiceNow. Moderate level of working knowledge of ITIL processes. Moderate level of experience working with vendors and/or 3rd parties. Workplace type: On-site Working About NTT DATA NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world.
NTT DATA is part of NTT Group and headquartered in Tokyo. Equal Opportunity Employer NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
Posted 3 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Make an impact with NTT DATA Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive. Your day at NTT DATA As a Cross Technology Managed Services Engineer (L2) at NTT DATA, you will play an essential role in maintaining our clients' IT infrastructure and systems. Your expertise will help keep everything running smoothly by proactively monitoring, identifying, investigating, and resolving technical incidents and problems. You'll be the go-to person to restore services and ensure our clients' satisfaction. Your typical day involves managing work queues, addressing incidents and requests within agreed SLAs, and updating tickets with the actions taken. By identifying, analysing, and logging issues before they escalate, you'll be instrumental in maintaining service quality. You'll also collaborate closely with other teams and clients to provide second-level support, ensuring seamless communication and efficient problem resolution. You will execute changes meticulously, understanding and mitigating risks, and contribute to the change management process with detailed documentation. Your role includes auditing incident and request tickets for quality, recommending improvements, and identifying tasks for automation to enhance efficiency. Additionally, you'll handle client escalations with professionalism and assist in disaster recovery functions and tests when necessary. Working within our diverse and inclusive environment, you'll help drive the optimization of efforts by working with automation teams and supporting L1 Engineers. Your responsibility also extends to contributing to various projects, ensuring that all changes are approved, and maintaining a positive outlook even in high-pressure situations. 
To thrive in this role, you need to have: Moderate-level experience in managed services roles handling cross-technology infrastructure. Knowledge of ticketing tools, preferably ServiceNow. Familiarity with ITIL processes and experience working with vendors and third parties. Proficiency in planning activities and projects, taking changing circumstances into account. Ability to work longer hours when necessary and adapt to changing circumstances with ease. Proven ability to communicate effectively and work across different cultures and social groups. Positive outlook and ability to work well under pressure. Commitment to placing clients at the forefront of all interactions, understanding their requirements, and ensuring a positive experience. Bachelor's degree in IT/Computing or equivalent work experience. Workplace type: On-site Working
Posted 3 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Make an impact with NTT DATA Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive. Your day at NTT DATA The Networking Managed Services Engineer (L2) is a developing engineering role, responsible for providing a managed service to clients to ensure that their IT infrastructure and systems remain operational through proactively monitoring, identifying, investigating, and resolving technical incidents and problems and restoring service to clients. The primary objective of this role is to proactively review client requests or tickets and apply technical/process knowledge to resolve them without breaching the service level agreement (SLA), and it focuses on second-line support for incidents and requests with a medium level of complexity. The Networking Managed Services Engineer (L2) may also contribute to / support on project work as and when required. What You'll Be Doing Key Responsibilities: Proactively monitors the work queues. Performs operational tasks to resolve all incidents/requests in a timely manner and within the agreed SLA. Updates tickets with resolution tasks performed. Identifies, investigates, and analyzes issues and errors prior to or when they occur, and logs all such incidents in a timely manner. Captures all required and relevant information for immediate resolution. Provides second level support to all incidents and requests, and identifies the root cause of incidents and problems. Communicates with other teams and clients for extending support. Executes changes with clear identification of risks and mitigation plans to be captured into the change record. Follows the shift handover process, highlighting any key tickets to be focused on along with a handover of upcoming critical tasks to be carried out in the next shift.
Escalates tickets to seek the right focus from CoE and other teams, and if needed continues the escalation to management. Works with automation teams for effort optimization and automating routine tasks. Coaches Service Desk and L1 teams on technical and behavioural skills. Establishes monitoring for client infrastructure. Identifies problems and errors before they impact a client’s service. Leads and manages all initial client escalations for operational issues. Contributes to the change management process by logging all change requests with complete details for standard and non-standard changes, including patching and any other changes to Configuration Items. Ensures all changes are carried out with proper change approvals. Plans and executes approved maintenance activities. Audits and analyses incident and request tickets for quality and recommends improvements with updates to knowledge articles. Produces trend analysis reports for identifying tasks for automation, leading to a reduction in tickets and optimization of effort. May also contribute to / support on project work as and when required. May work on implementing and delivering Disaster Recovery functions and tests. Performs any other related task as required. Academic Qualifications and Certifications: Bachelor's degree or equivalent qualification in IT/Computing (or demonstrated equivalent work experience). CCNP or equivalent certification. Certifications relevant to the services provided (certifications carry additional weightage on a candidate’s qualification for the role). Required Experience: 5–8 years of experience in networking, with a minimum of 2–3 years in SDN (ACI). Moderate-level knowledge of ticketing tools, preferably ServiceNow. Workplace type: On-site Working
Posted 3 days ago
0.0 - 5.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
Job Summary: The Head of Mobile Technology will be responsible for managing and growing the iOS and Android development teams, owning the end-to-end delivery of mobile features, and ensuring platform stability, scalability, and innovation. This leadership role involves strategic planning, team hiring, technical mentoring, and alignment with HealthSy’s long-term vision. Key Responsibilities: 1. Team Management & Leadership Lead and manage iOS and Android teams, including daily standups, progress reviews, and sprint planning. Plan, hire, and onboard developers for both platforms in collaboration with the HR team. Cultivate a high-performance, agile engineering culture focused on quality, accountability, and innovation. 2. Technical Strategy & Execution Own the mobile architecture and technology roadmap across both platforms. Ensure consistency, feature parity, and seamless UX between iOS and Android apps. Guide teams in implementing secure, efficient, and scalable code in Swift (iOS) and Kotlin (Android). Collaborate closely with product, design, QA, and backend teams to drive releases. 3. Quality Control & Process Set up and enforce best practices in code quality, testing, CI/CD, and version control. Perform regular code and architecture reviews to maintain performance and reliability. Ensure robust error tracking, logging, and post-release monitoring. 4. Stakeholder Communication & Reporting Act as the technical voice in leadership discussions and product planning. Prepare and present progress reports, tech KPIs, and sprint outcomes to founders and business heads. Required Skills & Qualifications 4+ years of hands-on experience in mobile app development, including team leadership roles. Strong expertise in Swift (iOS) and Kotlin (Android) with a solid understanding of mobile architectures (MVVM, Clean, etc.). Experience managing and scaling cross-functional teams. Proven ability to translate product goals into tech milestones and guide teams to execution.
Strong understanding of API integration, mobile security, performance optimization, and App Store/Play Store release cycles. Preferred Skills Experience working in health-tech or consumer-facing platforms. Knowledge of healthcare compliance and data protection regulations (HIPAA/GDPR equivalents). Familiarity with Flutter or hybrid frameworks (optional but a plus). Backend awareness (Node.js/PHP/Java), Firebase, or real-time database handling is advantageous. Work Setup: On-site (Coimbatore office) Compensation: Competitive salary + ESOP options (if applicable) Job Type: Full-time Ability to commute/relocate: Coimbatore, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: Swift: 1 year (Required) Kotlin: 1 year (Required) Mobile applications: 5 years (Required) Work Location: In person
Posted 3 days ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About Media.net: Media.net is a leading, global ad tech company that focuses on creating the most transparent and efficient path for advertiser budgets to become publisher revenue. Our proprietary contextual technology is at the forefront of enhancing programmatic buying, the latest industry standard in ad buying for digital platforms. The Media.net platform powers major global publishers and ad-tech businesses at scale across ad formats like display, video, mobile, native, as well as search. Media.net’s U.S. HQ is based in New York, and the Global HQ is in Dubai. With office locations and consultant partners across the world, Media.net takes pride in the value-add it offers to its 50+ demand and 21K+ publisher partners, in terms of both products and services. Responsibilities (What You’ll Do) Infrastructure Management: Oversee and maintain the infrastructure that supports the ad exchange applications. This includes load balancers, data stores, CI/CD pipelines, and monitoring stacks. Continuously improve infrastructure resilience, scalability, and efficiency to meet the demands of massive request volume and stringent latency requirements. Develop policies and procedures that improve overall platform stability, and participate in a shared on-call schedule. Collaboration with Developers: Work closely with developers to establish and uphold quality and performance benchmarks, ensuring that applications meet necessary criteria before they are deployed to production. Participate in design reviews and provide feedback on infrastructure-related aspects to improve system performance and reliability. Building Tools for Infra Management: Develop tools to simplify and enhance infrastructure management, automate processes, and improve operational efficiency. These tools may address areas such as monitoring, alerting, deployment automation, and failure detection and recovery, which are critical in minimizing latency and maintaining uptime.
Performance Optimization: Focus on reducing latency and maximizing efficiency across all components, from request handling in load balancers to database optimization. Implement best practices and tools for performance monitoring, including real-time analysis and response mechanisms. Who Should Apply B.Tech/M.Tech or equivalent in Computer Science, Information Technology, or a related field. 2–4 years of experience managing services in large-scale distributed systems. Strong understanding of networking concepts (e.g., TCP/IP, routing, SDN) and modern software architectures. Proficiency in programming and scripting languages such as Python, Go, or Ruby, with a focus on automation. Experience with container orchestration tools like Kubernetes and virtualization platforms (preferably GCP). Ability to independently own problem statements, manage priorities, and drive solutions. Preferred Skills & Tools Expertise: Infrastructure as Code: Experience with Terraform. Configuration management tools like Nix or Ansible. Monitoring and Logging Tools: Expertise with Prometheus, Grafana, or the ELK stack. OLAP Databases: ClickHouse and Apache Druid. CI/CD Pipelines: Hands-on experience with Jenkins or ArgoCD. Databases: Proficiency in MySQL (relational) or Redis (NoSQL). Load Balancers / Servers: Familiarity with HAProxy or Nginx. Strong knowledge of operating systems and networking fundamentals. Experience with version control systems such as Git.
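Latency-focused performance monitoring of the kind described above is usually reported as percentiles (p50, p99) rather than averages, since a handful of slow requests dominate user experience. A minimal sketch of a nearest-rank percentile over raw latency samples; the sample values are invented, and production systems would typically use Prometheus histograms instead of raw samples:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample value such that at
    least pct percent of all samples are less than or equal to it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical request latencies in milliseconds; note the long tail.
latencies_ms = [12, 15, 11, 250, 14, 13, 16, 12, 18, 900]
print(percentile(latencies_ms, 50))  # 14 (median looks healthy)
print(percentile(latencies_ms, 99))  # 900 (tail latency tells the real story)
```

The gap between the median and the 99th percentile here is exactly why SLAs for ad-serving and similar low-latency systems are stated in tail percentiles.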
Posted 3 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive.

Your day at NTT DATA
The Managed Services Cross Technology Engineer (L2) is a developing engineering role, responsible for providing a managed service to clients to ensure that their IT infrastructure and systems remain operational. Through proactive monitoring, identifying, investigating, and resolving technical incidents and problems, the Managed Services Cross Technology Engineer (L2) restores service to clients. The primary objective of this role is to proactively review client requests or tickets and apply technical and process knowledge to resolve them without breaching the service level agreement (SLA). The role focuses on second-line support for incidents and requests of medium complexity across two or more technology domains: Cloud, Security, Networking, Applications, and/or Collaboration. It may also contribute to or support project work as and when required.

What You'll Be Doing
Key Responsibilities:
- Proactively monitors the work queues.
- Performs operational tasks to resolve all incidents/requests in a timely manner and within the agreed SLA.
- Updates tickets with the resolution tasks performed.
- Identifies, investigates, and analyses issues and errors before or when they occur, logs all such incidents in a timely manner, and captures all information required for immediate resolution.
- Provides second-level support for all incidents and requests, and identifies the root cause of incidents and problems.
- Communicates with other teams and clients to extend support.
- Executes changes with clear identification of risks and mitigation plans captured in the change record.
- Follows the shift handover process, highlighting key tickets to focus on along with upcoming critical tasks to be carried out in the next shift.
- Escalates tickets to get the right focus from CoE and other teams, continuing the escalation to management if needed.
- Works with automation teams on effort optimization and automating routine tasks.
- Works across other resolver groups (internal and external), such as service providers, TAC, etc.
- Identifies problems and errors before they impact a client’s service.
- Assists L1 Security Engineers with initial triage and troubleshooting.
- Leads and manages all initial client escalations for operational issues.
- Contributes to the change management process by logging all change requests with complete details for standard and non-standard changes, including patching and any other changes to Configuration Items, and ensures all changes are carried out with proper change approvals.
- Plans and executes approved maintenance activities.
- Audits and analyses incident and request tickets for quality, and recommends improvements with updates to knowledge articles.
- Produces trend analysis reports to identify tasks for automation, leading to a reduction in tickets and optimization of effort.
- May also contribute to or support project work as and when required, including implementing and delivering Disaster Recovery functions and tests.
- Performs any other related tasks as required.

Knowledge and Attributes:
- Ability to communicate and work across different cultures and social groups.
- Ability to plan activities and projects well in advance, taking into account possible changing circumstances.
- Ability to maintain a positive outlook at work.
- Ability to work well in a pressurized environment.
- Ability to work hard and put in longer hours when necessary.
- Ability to apply active listening techniques such as paraphrasing the message to confirm understanding, probing for further relevant information, and refraining from interrupting.
- Ability to adapt to changing circumstances.
- Ability to place clients at the forefront of all interactions, understanding their requirements and creating a positive client experience throughout the total client journey.

Academic Qualifications and Certifications:
- Bachelor's degree or equivalent qualification in IT/Computing (or demonstrated equivalent work experience).
- Certifications relevant to the services provided (certifications carry additional weight in a candidate’s qualification for the role). Relevant certifications include (but are not limited to):
  - CCNA certification is a must; CCNP (or equivalent), CCNP Security, or PCNSE certification is good to have
  - Microsoft Certified: Azure Administrator Associate
  - AWS Certified Solutions Architect - Associate
  - Veeam Certified Engineer
  - VMware Certified Professional: Data Center Virtualization
  - Zerto, Pure Storage, VxRail
  - Google Cloud Platform (GCP)
  - Oracle Cloud Infrastructure (OCI)
  - SAP Certified Technology Associate - OS DB Migration for SAP NetWeaver 7.4
  - SAP Technology Consultant
  - SAP Certified Technology Associate - SAP HANA 2.0
  - Oracle Cloud Infrastructure Architect Professional
  - IBM Certified System Administrator - WebSphere Application Server

Required Experience:
- Moderate level of relevant managed services experience handling cross-technology infrastructure.
- Moderate level of knowledge of ticketing tools, preferably ServiceNow.
- Moderate level of working knowledge of ITIL processes.
- Moderate level of experience working with vendors and/or third parties.

Workplace type: On-site Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success.
We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
Posted 3 days ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
- Develop and maintain Node.js-based server-side APIs using Express.js.
- Manage databases (e.g., MongoDB) for data storage and retrieval.
- Implement authentication and authorization mechanisms (e.g., JWT, OAuth) for access control.
- Configure middleware and routing for API endpoints.
- Handle errors, implement error-handling middleware, and maintain error logs.
- Optimize API endpoints for performance, scalability, and resource efficiency.
- Write comprehensive unit and integration tests (Mocha, Chai, Jest).
- Apply advanced security practices, including input validation and protection against common vulnerabilities.
- Collaborate with DevOps teams to set up CI/CD pipelines for automated testing and deployment.
- Create detailed API documentation, code comments, and README files.
- Set up monitoring tools (e.g., Prometheus, Grafana) and logging solutions (e.g., ELK stack).
- Understand and configure load balancing for scalability.
- Work with containerization technologies like Docker and Kubernetes.
- Proficiently use Git for version control and participate in code reviews.
- Collaborate with cross-functional teams, including front-end developers and QA engineers.
- Follow Agile methodologies (Scrum, Kanban) for project management.
- Stay updated with security trends and best practices.
- Troubleshoot and resolve complex issues and performance bottlenecks.
- Make architectural decisions to accommodate application scalability.
- Commit to continuous learning and skill improvement in Node.js development.
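The JWT-based access control mentioned above boils down to signing and verifying a `header.payload.signature` token. The sketch below shows the HS256 mechanics in Python for brevity; in the Express.js stack this role describes, a library such as jsonwebtoken would normally handle this, and the secret shown is a placeholder.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build an HS256 JWT: base64url(header).base64url(payload).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Verify the signature and return the decoded payload."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))
```

A middleware would run `verify_jwt` on the `Authorization: Bearer <token>` header and reject the request with a 401 if verification raises.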
Posted 3 days ago
3.0 years
0 Lacs
India
On-site
We need an experienced DevOps Engineer to single-handedly build our Automated Provisioning Service on Google Cloud Platform. You'll implement infrastructure automation that provisions complete cloud environments for B2B customers in under 10 minutes.

Core Responsibilities:

Infrastructure as Code Implementation
- Develop Terraform modules for automated GCP resource provisioning.
- Create reusable templates for: GKE cluster deployment with predefined node pools; Cloud Storage bucket configuration; Cloud DNS and SSL certificate automation; IAM roles and service account setup.
- Implement state management and version control for IaC.

Automation & Orchestration
- Build Cloud Functions or Cloud Build triggers for provisioning workflows.
- Create automation scripts (Bash/Python) for deployment orchestration.
- Deploy containerized Node.js applications to GKE using Helm charts.
- Configure automated SSL certificate provisioning via Certificate Manager.

Security & Access Control
- Implement IAM policies and RBAC for customer isolation.
- Configure secure service accounts with minimal required permissions.
- Set up audit logging and monitoring for all provisioned resources.

Integration & Deployment
- Create webhook endpoints to receive provisioning requests from the frontend.
- Implement provisioning status tracking and error handling.
- Document deployment procedures and troubleshooting guides.
- Ensure the 5–10 minute provisioning time SLA.

Required Skills & Certifications:

Mandatory certification (must have one of the following):
- Google Cloud Associate Cloud Engineer (minimum requirement)
- Google Cloud Professional Cloud DevOps Engineer (preferred)
- Google Cloud Professional Cloud Architect (preferred)

Technical skills (must have):
- 3+ years of hands-on experience with Google Cloud Platform.
- Strong Terraform expertise with a proven track record.
- GKE/Kubernetes deployment and management experience.
- Proficiency in Bash and Python scripting.
- Experience with CI/CD pipelines (Cloud Build preferred).
- Knowledge of GCP IAM and security best practices.
- Ability to work independently with minimal supervision.

Nice to Have:
- Experience developing RESTful APIs for service integration.
- Experience with multi-tenant architectures.
- Node.js/Docker containerization experience.
- Helm chart creation and management.

Deliverables (2-Month Timeline)

Month 1:
- Complete Terraform modules for all GCP resources.
- Working prototype of the automated provisioning flow.
- Basic IAM and security implementation.
- Integration with webhook triggers.

Month 2:
- Production-ready deployment with error handling.
- Performance optimization (achieve <10-minute provisioning).
- Complete documentation and runbooks.
- Handover and knowledge transfer.

Technical Environment
- Primary tools: Terraform, GCP (GKE, Cloud Storage, Cloud DNS, IAM)
- Languages: Bash, Python (automation scripts)
- Orchestration: Cloud Build, Cloud Functions
- Containerization: Docker, Kubernetes, Helm

Ideal Candidate
- Self-starter who can own the entire DevOps scope independently.
- Strong problem-solver comfortable with ambiguity.
- Excellent time management skills to meet tight deadlines.
- Clear communicator who documents their work thoroughly.

Important Note: Google Cloud certification is mandatory for this position due to partnership requirements. Please include your certification details and ID number in your application.

Application Requirements:
- Proof of valid Google Cloud certification
- Examples of similar GCP automation projects
- GitHub/GitLab links to relevant Terraform modules (if available)
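The "provisioning status tracking and error handling" responsibility above can be sketched as a small state machine that runs provisioning steps in order and records where a failure occurred. The states and step names below are hypothetical illustrations; a real implementation would persist status (e.g. in a database the frontend polls) and drive Terraform or Cloud Build rather than in-process callables.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional
import time

class Status(Enum):
    PENDING = "pending"
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"

@dataclass
class ProvisioningJob:
    """Tracks one customer environment's provisioning run."""
    customer_id: str
    status: Status = Status.PENDING
    error: Optional[str] = None
    started_at: Optional[float] = None

    def run(self, steps):
        """Execute (name, callable) steps in order; stop and record the
        failing step on the first exception."""
        self.status = Status.RUNNING
        self.started_at = time.monotonic()
        for name, step in steps:
            try:
                step()
            except Exception as exc:
                self.status = Status.FAILED
                self.error = f"{name}: {exc}"
                return self
        self.status = Status.SUCCEEDED
        return self
```

A webhook handler would create a `ProvisioningJob` per request and expose its `status`/`error` to the frontend, which is what makes the under-10-minute SLA measurable per customer.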
Posted 3 days ago