0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description
The primary responsibility of this role is to perform campaign operations that improve the visibility of content on Amazon Prime Video. The candidate will need to quickly understand the campaign ops tools and operations workflow tools. The associate must continuously adapt to and learn new features of the program and sharpen their ability to quickly edit and fix content. The associate has to follow the editing SOP to spot and catch errors in the content, perform content quality checks to qualify the user experience for content viewing (flow and format quality), and use software tools for quality audits, content editing and data capture. The associate will need to be aware of operations metrics such as productivity (number of titles processed per hour), quality (defect %) and delivery/latency SLA, and will be measured on compliance with these metrics, SLA requirements, QA guidelines, and team and personal goals. The associate should be a team player and bring improvement ideas to their direct report to improve the editing/QA process. The associate will often need to contact stakeholders globally to provide status reports, communicate relevant information and escalate when needed. This is an individual contributor role. It requires a graduate degree, exposure to MS Office and comfort with numbers. In addition, the associate should have attention to detail, good communication skills, and a professional demeanor. The role requires the associate to be comfortable with night shift hours and flexible to extend support during critical business requirements.
Basic Qualifications
Completed undergraduate degree (UG) in any stream
Analytical knowledge to solve basic mathematical and logical problems
Familiarity with Excel functions
Ability to communicate effectively
Strong attention to detail in editing content, with the ability to deep dive and identify root causes of issues
Good at problem solving, data analysis and troubleshooting issues related to content editing
Preferred Qualifications
Ability to meet deadlines in a fast-paced work environment driven by complex software systems and processes
Self-starter and good team player
Good interpersonal skills to manage ongoing relationships with the program team and inter-operations teams
Working knowledge of XML standards would be an added advantage
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - Tamil Nadu
Job ID: A3035562
Posted 2 weeks ago
2.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
JOB_POSTING-3-72598-1
Job Description
Role Title: Software Engineer (L09)
Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 of India’s Best Workplaces in Building a Culture of Innovation for All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview
This role will be part of the Data Architecture & Analytics group within the CTO organization. The API Service team specializes in provisioning REST APIs for real-time interactions with the Enterprise Data Lake, supporting business decision-making by designing and delivering resilient, scalable, secure and low-latency solutions using cutting-edge tools and cloud technologies. The team leverages a Java Spring Boot microservice architecture with built-in advanced solutions to accomplish stability while maintaining performance across multiple platforms that provide actionable insights across various business functions. Through collaboration with cross-functional teams, the API team ensures seamless deployment and optimization of real-time solutions in Cloud Foundry.
Role Summary/Purpose
We are looking for an API Developer to design and develop consumer-centric, low-latency, scalable cloud-native applications leveraging Spring and cloud technologies for our Enterprise Data Lake initiative. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights and integrate with our critical systems.
Key Responsibilities
Design and develop containerized microservices for cloud-native applications using the Spring framework
Develop low-latency Java Spring Boot APIs and deploy them on Pivotal Cloud Foundry (PCF) in a complex data management environment using CI/CD pipelines
Develop integrations with Kafka, HBase, Redshift, MySQL and Hive databases
Research caching techniques and develop solutions for data caches such as GemFire and Redis
Develop and deploy code both on-prem and on AWS
Required Skills/Knowledge
Experience with deployment of microservice architecture on the Pivotal Cloud Foundry platform
Experience with public cloud computing platforms such as AWS
Experience integrating with load balancers and the Protegrity platform for tokenization
Experience with Agile project management methods and practices
Demonstrated excellent planning and organizational skills
Ability to collaborate across teams of internal and external technical staff, business analysts, software support and operations staff
Able to work effectively with multiple teams and stakeholders
Desired Skills/Knowledge
Experience working in the financial industry or credit processing experience
Willingness to stay abreast of the latest developments in technology
Experience working on a geographically distributed team managing onshore/offshore resources with shifting priorities
Eligibility Criteria
Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) with a minimum of 2 years of professional Java development experience; or, in lieu of a degree, 4 years of professional development (Java/J2EE) experience
Minimum 2 years of in-depth experience in design and development using J2EE/Eclipse, Spring Boot and REST services in complex, large-scale environments
Minimum 1 year of experience using in-memory data grid technology such as GemFire, Redis or Hazelcast
Minimum 1 year of experience integrating with middleware platforms such as IBM WebSphere, Tibco or Oracle ESB
Minimum 1 year of experience integrating with Hadoop/Hive, SQL and HBase/NoSQL data stores like Cassandra and MongoDB
Work Timings: 3 PM to 12 AM IST
This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.
For Internal Applicants
Understand the criteria or mandatory skills required for the role before applying
Inform your manager and HRM before applying for any role on Workday
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format)
Must not be on any corrective action plan (First Formal/Final Formal, LPP)
L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible
L8+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible
L04+ employees can apply
Grade/Level: 09
Job Family Group: Information Technology
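The responsibilities above include researching caching techniques for data caches such as GemFire and Redis. As a hedged illustration of the cache-aside pattern that work typically involves (shown in Python for brevity, while the role itself uses Java Spring Boot), the sketch below checks Redis before falling back to the system of record; the connection settings, key convention, TTL and loader function are assumptions, not details from the posting.

```python
import json
import redis  # redis-py client

# Hypothetical connection settings; a real service would take these from configuration.
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 300

def load_account_from_source(account_id: str) -> dict:
    """Stand-in for a query against the system of record (e.g. MySQL/Hive/HBase)."""
    return {"account_id": account_id, "status": "ACTIVE"}

def get_account(account_id: str) -> dict:
    """Cache-aside read: try the cache first, fall back to the source and populate the cache."""
    key = f"account:{account_id}"  # hypothetical key convention
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    record = load_account_from_source(account_id)
    cache.set(key, json.dumps(record), ex=CACHE_TTL_SECONDS)
    return record

if __name__ == "__main__":
    print(get_account("12345"))  # first call populates the cache; later calls hit Redis
```

The TTL keeps stale entries from lingering when the underlying data changes; the same pattern maps directly onto Spring's caching abstractions in the Java stack named above.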
Posted 2 weeks ago
20.0 years
0 Lacs
India
Remote
AI Centre Ethernet Switching Architect
Location: India (remote). The person could be based anywhere in India; remote work will be considered for exceptional profiles. Founded by highly respected Silicon Valley veterans, with design centers established in Santa Clara, California / Hyderabad / Bangalore.
Position Overview
We are seeking a top-notch specialist architect with over 20 years of experience to join our team in designing and developing Ethernet switches tailored for AI datacenter backend networks. The ideal candidate will have a strong background in digital design, ASIC/FPGA development and Ethernet/TCP/IP protocols, along with experience in high-performance interconnect protocols such as InfiniBand, NVLink, Infinity Fabric, UALink and Ultra Ethernet, with a focus on delivering high-performance, low-latency solutions for large-scale AI workloads.
Key Responsibilities
Define and develop the architecture of an AI datacenter switch fabric from the ground up
Performance modelling and optimization of the latency, throughput and power efficiency of the switch fabric
Decompose the architecture into sub-blocks for implementation by the design team
Implement Ethernet protocols (IEEE 802.3, 100G/400G/800G/1600G), ECMP, congestion control and packet spraying
Apply knowledge of InfiniBand/Ultra Ethernet, NVLink/UALink, or similar protocols for feature implementation
Understanding of or experience with IOS/Junos or an equivalent software platform
Use P4 or related languages for programmable packet processing
Work with the design, software and verification teams to deliver complete product solutions
Document the architecture and stay updated on AI networking trends
Required Qualifications
Education: MS/PhD in Electrical/Electronic Engineering.
Technical Skills: Proficient in Verilog/SystemVerilog for design. Knowledge of Ethernet (IEEE 802.3, 100G/400G/800G/1600G), ECMP, and congestion control. Experience with InfiniBand, NVLink, or similar protocols. Proficiency in P4 or programmable data plane languages. Knowledge of UALink, Ultra Ethernet, or RDMA/RoCE.
Soft Skills: Strong problem-solving, communication, and teamwork skills.
Contact: Uday Mulya Technologies, muday_bhaskar@yahoo.com ("Mining The Knowledge Community")
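As a small illustration of the ECMP idea named in the responsibilities above, the sketch below hashes a flow's 5-tuple to choose one of several equal-cost next hops, so packets of the same flow stay on one path while different flows spread across the fabric. This is a generic Python example, not part of the posting; the field names, hash choice and path labels are assumptions, and real switch datapaths implement this in hardware or P4.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int  # e.g. 6 = TCP, 17 = UDP

def ecmp_next_hop(flow: Flow, next_hops: list[str]) -> str:
    """Pick a next hop by hashing the 5-tuple, keeping each flow on a single path."""
    key = f"{flow.src_ip}|{flow.dst_ip}|{flow.src_port}|{flow.dst_port}|{flow.protocol}"
    digest = hashlib.sha256(key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

if __name__ == "__main__":
    paths = ["spine1", "spine2", "spine3", "spine4"]  # hypothetical equal-cost uplinks
    flow = Flow("10.0.0.1", "10.0.1.9", 49152, 443, 6)
    # The same flow always maps to the same uplink; varying the ports spreads load.
    print(ecmp_next_hop(flow, paths))
```

Per-flow hashing avoids packet reordering; the packet-spraying approach also mentioned above deliberately relaxes this to spread a single large flow across paths.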
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB_POSTING-3-72598
Job Description
Role Title: Software Engineer (L09)
Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 of India’s Best Workplaces in Building a Culture of Innovation for All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview
This role will be part of the Data Architecture & Analytics group within the CTO organization. The API Service team specializes in provisioning REST APIs for real-time interactions with the Enterprise Data Lake, supporting business decision-making by designing and delivering resilient, scalable, secure and low-latency solutions using cutting-edge tools and cloud technologies. The team leverages a Java Spring Boot microservice architecture with built-in advanced solutions to accomplish stability while maintaining performance across multiple platforms that provide actionable insights across various business functions. Through collaboration with cross-functional teams, the API team ensures seamless deployment and optimization of real-time solutions in Cloud Foundry.
Role Summary/Purpose
We are looking for an API Developer to design and develop consumer-centric, low-latency, scalable cloud-native applications leveraging Spring and cloud technologies for our Enterprise Data Lake initiative. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights and integrate with our critical systems.
Key Responsibilities
Design and develop containerized microservices for cloud-native applications using the Spring framework
Develop low-latency Java Spring Boot APIs and deploy them on Pivotal Cloud Foundry (PCF) in a complex data management environment using CI/CD pipelines
Develop integrations with Kafka, HBase, Redshift, MySQL and Hive databases
Research caching techniques and develop solutions for data caches such as GemFire and Redis
Develop and deploy code both on-prem and on AWS
Required Skills/Knowledge
Experience with deployment of microservice architecture on the Pivotal Cloud Foundry platform
Experience with public cloud computing platforms such as AWS
Experience integrating with load balancers and the Protegrity platform for tokenization
Experience with Agile project management methods and practices
Demonstrated excellent planning and organizational skills
Ability to collaborate across teams of internal and external technical staff, business analysts, software support and operations staff
Able to work effectively with multiple teams and stakeholders
Desired Skills/Knowledge
Experience working in the financial industry or credit processing experience
Willingness to stay abreast of the latest developments in technology
Experience working on a geographically distributed team managing onshore/offshore resources with shifting priorities
Eligibility Criteria
Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) with a minimum of 2 years of professional Java development experience; or, in lieu of a degree, 4 years of professional development (Java/J2EE) experience
Minimum 2 years of in-depth experience in design and development using J2EE/Eclipse, Spring Boot and REST services in complex, large-scale environments
Minimum 1 year of experience using in-memory data grid technology such as GemFire, Redis or Hazelcast
Minimum 1 year of experience integrating with middleware platforms such as IBM WebSphere, Tibco or Oracle ESB
Minimum 1 year of experience integrating with Hadoop/Hive, SQL and HBase/NoSQL data stores like Cassandra and MongoDB
Work Timings: 3 PM to 12 AM IST
This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.
For Internal Applicants
Understand the criteria or mandatory skills required for the role before applying
Inform your manager and HRM before applying for any role on Workday
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format)
Must not be on any corrective action plan (First Formal/Final Formal, LPP)
L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible
L8+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible
L04+ employees can apply
Grade/Level: 09
Job Family Group: Information Technology
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
JOB_POSTING-3-72598-4
Job Description
Role Title: Software Engineer (L09)
Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 of India’s Best Workplaces in Building a Culture of Innovation for All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview
This role will be part of the Data Architecture & Analytics group within the CTO organization. The API Service team specializes in provisioning REST APIs for real-time interactions with the Enterprise Data Lake, supporting business decision-making by designing and delivering resilient, scalable, secure and low-latency solutions using cutting-edge tools and cloud technologies. The team leverages a Java Spring Boot microservice architecture with built-in advanced solutions to accomplish stability while maintaining performance across multiple platforms that provide actionable insights across various business functions. Through collaboration with cross-functional teams, the API team ensures seamless deployment and optimization of real-time solutions in Cloud Foundry.
Role Summary/Purpose
We are looking for an API Developer to design and develop consumer-centric, low-latency, scalable cloud-native applications leveraging Spring and cloud technologies for our Enterprise Data Lake initiative. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights and integrate with our critical systems.
Key Responsibilities
Design and develop containerized microservices for cloud-native applications using the Spring framework
Develop low-latency Java Spring Boot APIs and deploy them on Pivotal Cloud Foundry (PCF) in a complex data management environment using CI/CD pipelines
Develop integrations with Kafka, HBase, Redshift, MySQL and Hive databases
Research caching techniques and develop solutions for data caches such as GemFire and Redis
Develop and deploy code both on-prem and on AWS
Required Skills/Knowledge
Experience with deployment of microservice architecture on the Pivotal Cloud Foundry platform
Experience with public cloud computing platforms such as AWS
Experience integrating with load balancers and the Protegrity platform for tokenization
Experience with Agile project management methods and practices
Demonstrated excellent planning and organizational skills
Ability to collaborate across teams of internal and external technical staff, business analysts, software support and operations staff
Able to work effectively with multiple teams and stakeholders
Desired Skills/Knowledge
Experience working in the financial industry or credit processing experience
Willingness to stay abreast of the latest developments in technology
Experience working on a geographically distributed team managing onshore/offshore resources with shifting priorities
Eligibility Criteria
Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) with a minimum of 2 years of professional Java development experience; or, in lieu of a degree, 4 years of professional development (Java/J2EE) experience
Minimum 2 years of in-depth experience in design and development using J2EE/Eclipse, Spring Boot and REST services in complex, large-scale environments
Minimum 1 year of experience using in-memory data grid technology such as GemFire, Redis or Hazelcast
Minimum 1 year of experience integrating with middleware platforms such as IBM WebSphere, Tibco or Oracle ESB
Minimum 1 year of experience integrating with Hadoop/Hive, SQL and HBase/NoSQL data stores like Cassandra and MongoDB
Work Timings: 3 PM to 12 AM IST
This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.
For Internal Applicants
Understand the criteria or mandatory skills required for the role before applying
Inform your manager and HRM before applying for any role on Workday
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format)
Must not be on any corrective action plan (First Formal/Final Formal, LPP)
L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible
L8+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible
L04+ employees can apply
Grade/Level: 09
Job Family Group: Information Technology
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
What We Do
At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals.
What We Look For
Goldman Sachs Asset Management (GSAM) is one of the world’s leading investment managers. GSAM provides institutional and individual investors with investment and advisory solutions, with strategies spanning asset classes, industries, and geographies. We help our clients navigate today’s dynamic markets and identify the opportunities that shape their portfolios and long-term investment goals. We extend these global capabilities to the world’s leading pension plans, sovereign wealth funds, central banks, insurance companies, financial institutions, endowments, foundations, individuals and family offices. The Client Solutions Group is looking for a Full Stack engineer to assist in the buildout and scaling of the business globally, using cloud-based hybrid Big Data solutions to give Sales and Marketing teams a seamless experience in achieving their objectives through productivity tools, easy access to analytics and insights, and real-time tracking toward their goals.
Basic Qualifications - Skills And Experience We Are Looking For
Minimum 3 years of experience in Java, J2EE and/or Python
Strong proficiency in working with data: databases and deriving insights using machine learning
You approach problem solving with an open mind within the context of a team
You have exceptional analytical skills and are able to apply knowledge and experience in decision-making to arrive at creative and commercial solutions
You collaborate with globally located, cross-functional teams in building customer-centric products
Strong written and verbal communication skills
Experience integrating with RESTful web services
Comfort with Agile operating models
Experience working with a variety of technical and non-technical stakeholders
Preferred Qualifications
Experience with microservice architecture
Strong proficiency in distributed systems, low-latency services, NoSQL and relational databases
Ability to establish trusted partnerships with product heads and executive-level stakeholders
Experience with AWS
Experience with Kafka, MongoDB, Spring, Sybase or any RDBMS
Experience in Financial Services or Fintech is a plus
Goldman Sachs Engineering Culture
At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here!
© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Freshworks makes it fast and easy for businesses to delight their customers and employees. We do this by taking a fresh approach to building and delivering software that is affordable, quick to implement, and designed for the end user. Headquartered in San Mateo, California, Freshworks has a global team operating from 13 global locations to serve more than 65,000 companies - from startups to public companies - that rely on Freshworks software-as-a-service to enable a better customer experience (CRM, CX) and employee experience (ITSM). Freshworks’ cloud-based software suite includes Freshdesk (omni-channel customer support), Freshsales (sales automation), Freshmarketer (marketing automation), Freshservice (IT service desk), and Freshchat (AI-powered bots), supported by Neo, our underlying platform of shared services. Freshworks is featured in global national press including CNBC, Forbes, Fortune and Bloomberg, and has been a BuiltIn Best Place to Work in San Francisco and Denver for the last 3 years. Our customer ratings have earned Freshworks products TrustRadius Top Rated Software ratings and G2 Best of Awards for Best Feature Set, Best Value for the Price and Best Relationship.
Job Description
The SRE team includes expert software and system engineers who are custodians of the availability, scalability and performance of the SaaS products. We build tools and frameworks to monitor, load test and sometimes build full platform features that other products use. We undertake architecture reviews and help the individual product teams identify performance bottlenecks. We tend to look at the application from a system perspective, bottom-up rather than top-down. Our engineers have the freedom to pick the challenges that they work on and own each task to completion.
● Design, write, and deliver software to improve the availability, latency, and efficiency of Freshworks’ products and platforms.
● Manage availability, latency and performance of mission-critical services and build automation to prevent problem recurrence.
● Independently determine and develop architectural approaches and infrastructure solutions.
● Define strategy, vision, and roadmap to develop CI/CD, application hosting, security and compliance standards and guidelines across Freshworks.
● Drive blameless postmortems for large-scale incidents.
● Define and drive automation and orchestration strategies.
● Strategize cost optimization across the Freshworks cloud environment.
Qualifications
4-7 years of experience with strong programming skills in Python, Go, or Bash for building infrastructure tooling and automation frameworks.
Extensive hands-on experience with relational databases (e.g., MySQL, PostgreSQL, SQL Server) and distributed NoSQL systems (e.g., Cassandra, MongoDB, DynamoDB).
Proven track record of designing and operating databases in large-scale cloud-native environments (AWS, GCP, Azure).
Expertise with Infrastructure as Code (Terraform, Helm, Ansible) and Kubernetes for managing production database systems.
Deep knowledge of database replication, clustering, backup/restore, and failover techniques.
Advanced experience with observability tooling (Prometheus, Grafana, Datadog, New Relic) for monitoring distributed databases.
Strong communication skills and ability to influence across teams and levels.
Additional Information
At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
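As a small illustration of the Prometheus-style observability work mentioned in the qualifications above (tracking availability and latency of services and databases), here is a hedged Python sketch using the prometheus_client library to expose a latency histogram and failure counter for a synthetic probe loop. The probe target, metric names and failure rate are hypothetical placeholders, not details from the posting.

```python
import time
import random
from prometheus_client import Histogram, Counter, start_http_server

# Hypothetical metric names; a real SLO dashboard would follow the team's naming conventions.
REQUEST_LATENCY = Histogram("probe_request_latency_seconds",
                            "Latency of synthetic probe requests")
REQUEST_FAILURES = Counter("probe_request_failures_total",
                           "Number of failed synthetic probe requests")

def probe_once() -> None:
    """Simulate one availability probe and record its latency and any failure."""
    with REQUEST_LATENCY.time():
        time.sleep(random.uniform(0.01, 0.2))  # stand-in for a real HTTP or database call
        if random.random() < 0.02:             # simulated 2% failure rate
            REQUEST_FAILURES.inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics scrapeable at http://localhost:8000/metrics
    while True:
        probe_once()
        time.sleep(1)
```

A Prometheus server scraping this endpoint could then drive the latency and error-budget alerts that the SRE responsibilities above describe.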
Posted 2 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description and Requirements
Position Summary
We are seeking a forward-thinking and enthusiastic Engineering and Operations Specialist to manage and optimize our MongoDB and Splunk platforms. The ideal candidate will have in-depth experience in at least one of these technologies, with a preference for experience in both.
Job Responsibilities
Work on engineering and operational tasks for the MongoDB and Splunk platforms, ensuring high availability and stability.
Continuously improve the stability of the environments, leveraging automation, self-healing mechanisms, and AIOps.
Develop and implement automation using technologies such as Ansible, Python and Shell.
Manage CI/CD deployments and maintain code repositories.
Utilize Infrastructure/Configuration as Code practices to streamline processes.
Work closely with development teams to integrate database and observability/logging tools effectively.
Manage design, distribution, performance, replication, security, availability, and access requirements for large and complex MongoDB databases (versions 6.0, 7.0, 8.0 and above) on Linux, both on-premises and cloud-based.
Design and develop physical layers of databases to support various application needs; implement backup, recovery, archiving and conversion strategies and performance tuning; manage job scheduling, application releases and database changes; and implement database and infrastructure security best practices to meet compliance requirements.
Monitor and tune MongoDB and Splunk clusters for optimal performance, identifying bottlenecks and troubleshooting issues.
Analyze database queries, indexing, and storage to ensure minimal latency and maximum throughput.
The Senior Splunk System Administrator will build, maintain, and standardize the Splunk platform, including forwarder deployment, configuration, dashboards, and maintenance across Linux OS.
Able to debug production issues by analyzing logs directly and using tools like Splunk.
Work in an Agile model with an understanding of Agile concepts and Azure DevOps.
Learn new technologies based on demand and help team members by coaching and assisting.
Education, Technical Skills & Other Critical Requirements
Education: Bachelor’s degree in Computer Science, Information Systems, or another related field with 7+ years of IT and infrastructure engineering work experience. MongoDB Certified DBA or Splunk Certified Administrator is a plus. Experience with cloud platforms like AWS, Azure, or Google Cloud.
Experience (in years): 7+ years total IT experience and 4+ years relevant experience with MongoDB, plus working experience as a Splunk administrator.
Technical Skills
In-depth experience with either MongoDB or Splunk, with a preference for exposure to both.
Strong enthusiasm for learning and adopting new technologies.
Experience with automation tools like Ansible, Python and Shell.
Proficiency in CI/CD deployments, DevOps practices, and managing code repositories.
Knowledge of Infrastructure/Configuration as Code principles.
Developer experience is highly desired. Data engineering skills are a plus. Experience with other DB technologies and observability tools is a plus.
Extensive work experience managing and optimizing MongoDB databases, designing robust schemas, and implementing security best practices, ensuring high availability, data integrity, and performance for mission-critical applications.
Working experience in database performance tuning with MongoDB tools and techniques.
Management of database elements, including creation, alteration, deletion and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
Extensive experience in database backup and recovery strategy: design, configuration and implementation using backup tools (mongodump, mongorestore) and Rubrik.
Extensive experience configuring and enforcing SSL/TLS encryption for secure communication between MongoDB nodes.
Working experience configuring and maintaining Splunk environments, developing dashboards, and implementing log management solutions to enhance system monitoring and security across Linux OS.
Experience with Splunk migration and upgrades on standalone Linux and cloud platforms is a plus.
Perform application administration for a single security information management system using Splunk.
Working knowledge of Splunk Search Processing Language (SPL), architecture and various components (indexer, forwarder, search head, deployment server).
Extensive experience in both MongoDB and Splunk replication between primary and secondary servers to ensure high availability and fault tolerance.
Managed infrastructure security policy per industry best standards by designing, configuring and implementing privileges and policies on databases using RBAC, as well as on Splunk.
Scripting skills and automation experience using DevOps, repos and Infrastructure as Code.
Working experience with containers (AKS and OpenShift) is a plus.
Working experience with cloud platforms (Azure, Cosmos DB) is a plus.
Strong knowledge of ITSM processes and tools (ServiceNow).
Ability to work 24x7 rotational shifts to support the database and Splunk platforms.
Other Critical Requirements
Strong problem-solving abilities and a proactive approach to identifying and resolving issues.
Excellent communication and collaboration skills.
Ability to work in a fast-paced environment and manage multiple priorities effectively.
About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible. Join us!
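To make the MongoDB high-availability and automation work described above a little more concrete, here is a hedged Python sketch (the posting lists Python among its automation tools) that checks replica-set member health with pymongo. The connection string, credentials and the idea of printing warnings are hypothetical; a real check would feed a monitoring or ticketing system such as the ones named in the posting.

```python
from pymongo import MongoClient

# Hypothetical connection string; a real deployment would pull this from secrets management.
MONGO_URI = "mongodb://dbadmin:password@host1:27017,host2:27017,host3:27017/?replicaSet=rs0"

def check_replica_set(uri: str) -> list[str]:
    """Return warnings about unhealthy replica-set members; empty means all healthy."""
    warnings = []
    client = MongoClient(uri, serverSelectionTimeoutMS=5000)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr is PRIMARY, SECONDARY or ARBITER when healthy; anything else needs attention.
        if member["stateStr"] not in ("PRIMARY", "SECONDARY", "ARBITER"):
            warnings.append(f"{member['name']} is in state {member['stateStr']}")
        if member.get("health", 1) != 1:
            warnings.append(f"{member['name']} reports health={member['health']}")
    return warnings

if __name__ == "__main__":
    for warning in check_replica_set(MONGO_URI):
        print("WARNING:", warning)
```

The same replSetGetStatus output also exposes replication lag per member, which is the usual next check before a planned failover.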
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
Job Title: AI/Gen AI Full Stack Engineer
Location: Bangalore, India (preferred) / Remote
Job Type: Full-Time
About Futurepath AI
Futurepath.ai is a leading AI platform startup growing at a fast pace, serving multiple Fortune 500 clients. Backed by prominent investors such as Village Global, we specialize in delivering cutting-edge AI solutions that empower businesses to tackle complex challenges and redefine possibilities. Join our innovative team and be a part of transforming industries with the power of AI and Generative AI technologies.
Job Summary
We are seeking a highly skilled and innovative AI/GenAI Engineer to join our team. In this critical role, you will focus on leveraging Large Language Models (LLMs) and Small Language Models (SLMs) to solve complex business challenges. Your expertise will drive the building of applications powered by these models, focusing on cost-efficiency, response quality, and latency for specific use cases. This role offers the opportunity to work at the forefront of AI technologies and contribute to the next phase of growth at Futurepath.ai.
Responsibilities
LLM and SLM Optimization: Design and develop scalable solutions using Large Language Models and Small Language Models for specific business use cases. Optimize LLM- and SLM-powered applications to balance cost, quality of response, and latency.
Generative AI Full Stack Applications: Develop innovative applications leveraging generative AI techniques, including natural language generation, summarization, and content creation. Create tailored Small Language Models to address domain-specific challenges efficiently. Solve challenges related to ethical AI, bias mitigation, and explainability in generative AI solutions.
Research and Development: Experiment with cutting-edge AI technologies and frameworks to enhance the functionality and efficiency of LLMs and SLMs. Stay updated on the latest advancements in generative AI and their practical applications. Contribute to the development of proprietary models and algorithms when necessary.
Performance Monitoring and Continuous Improvement: Monitor and evaluate the performance of AI solutions, identifying areas for improvement. Implement iterative development cycles to refine and enhance the effectiveness of LLM and SLM applications.
Requirements
Education: Bachelor’s or Master’s degree in Computer Science, AI, Machine Learning, or related fields.
Experience: 3+ years of experience in full stack development. Proven track record of building and deploying LLM-based and SLM-based applications in production environments.
Technical Skills: Expertise in working with popular LLMs (e.g., OpenAI GPT, Anthropic, LLaMA) and using SLMs for specific use cases. Strong proficiency in Python. Familiarity with APIs and cloud platforms (AWS, Azure, GCP) for LLM and SLM integration. Familiarity with frontend development using ReactJS.
Problem-Solving Skills: Deep understanding of trade-offs between cost, quality of response, and latency in AI-powered applications. Ability to troubleshoot and resolve challenges related to scalability, efficiency, and performance in AI systems.
Soft Skills: Strong communication skills to explain complex AI concepts to technical and non-technical stakeholders. Collaborative mindset with the ability to work in cross-functional teams. Proactive and self-motivated with a passion for pushing the limits of AI technologies.
What We Offer
Competitive salary and equity options.
Opportunity to work on cutting-edge AI and Generative AI technologies, including LLMs and SLMs.
Flexible work environment with remote options.
Professional growth opportunities in a high-impact, investor-backed startup.
Collaborative and innovative work culture.
How to Apply
If you are ready to lead the charge in building innovative AI and generative AI applications and optimizing LLM/SLM-powered solutions, send your resume and a portfolio of relevant work to aditi@futurepath.ai. Please include a cover letter detailing your experience and how you envision contributing to Futurepath.ai’s mission.
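As a small, hedged illustration of the cost/quality/latency trade-off this role centers on, the Python sketch below times a chat-completion call and records token usage (token counts drive the cost side of the trade-off). It assumes the OpenAI Python SDK (v1+) with an OPENAI_API_KEY in the environment; the model name, prompt and token limit are placeholders, not recommendations from the posting.

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def timed_completion(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Call a chat model and return the answer plus latency and token counts."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=200,
    )
    latency_s = time.perf_counter() - start
    usage = response.usage
    return {
        "answer": response.choices[0].message.content,
        "latency_s": round(latency_s, 3),
        "prompt_tokens": usage.prompt_tokens,          # drives input cost
        "completion_tokens": usage.completion_tokens,  # drives output cost
    }

if __name__ == "__main__":
    print(timed_completion("Summarize the trade-off between model size and latency in two sentences."))
```

Logging these three numbers per request is usually the first step before deciding whether a smaller model (SLM) gives acceptable quality at lower cost and latency for a given use case.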
Posted 2 weeks ago
9.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview
Job Title: Senior Engineer - SRE, AVP
Location: Pune, India
Corporate Title: AVP
Role Description
Site reliability engineers create a bridge between development and operations by applying a software engineering mindset to system administration topics. As an SRE at Deutsche Bank, you will play a pivotal role in ensuring the reliability, scalability, and performance of our systems. You will collaborate closely with feature and cross-functional teams to design, build, and maintain robust and efficient systems, applying cutting-edge technologies and best practices.
What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
Best-in-class leave policy
Gender-neutral parental leave
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complimentary health screening for those 35 years and above
Your Key Responsibilities
Proven experience leading and scaling Production/SRE teams in a high-growth environment.
Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.
Identify, design, develop and deploy tools and processes to monitor, maintain, and report site performance and availability.
Streamline repetitive tasks for automation using Ansible, shell scripts, and Java; monitor server health using Python and shell scripts; implement Business Continuity/Disaster Recovery plans for end-to-end application support processes.
Conduct builds and configuration using release management tools, including Bitbucket and TeamCity; utilize release management and incident tracking tools, including ServiceNow, to track incidents and work items and their progress.
Leverage SQL Server and Oracle databases, Linux, Java, and OpenShift to analyze issues and resolve incidents; set up and maintain monitoring of Non-Functional Requirements (NFRs) to track the overall quality, availability, response time, security and reliability of applications using Geneos, Prometheus, and Grafana.
Develop routines to deploy CIs to the target environments. Provide release deployments on non-Production-Management-controlled environments. Capture build and deployment notes; develop software product deployment and operating instructions.
Provide Level 3 support for technical infrastructure components (e.g. databases, middleware and user interfaces). Perform problem and root-cause analysis for application production incidents and deliver the necessary resolution pack (i.e. hotfixes, patches).
Provide L3 support and remediation on any issues pertaining to the above applications by providing detailed code analysis of the applications’ production platform. Remediate incidents and outages pertaining to the platform.
Conduct regularly scheduled Problem Management meetings with IT Product Managers (ITPMs), infrastructure groups, problem managers and incident managers to track progress and highlight issues.
Your Skills And Experience
Experience required: 9 to 12 years
Hands-on experience in UNIX and scripting (Shell, Perl)
Hands-on experience with various communication protocols (AS2, HTTPS, File Transfer Protocol Secured (FTPS), RFCs, SNC, MQ, etc.)
Hands-on experience with webserver (Apache) implementation and configuration
Hands-on experience with application server (WebLogic) implementation and configuration
Hands-on experience with OpenShift Fabric, Tomcat and WildFly configuration
Hands-on experience with Geneos, Control-M, Airflow and GCP landing zone configuration
Hands-on experience with TeamCity, Jenkins, uDeploy and CI/CD pipeline setup
Hands-on experience in Oracle PL/SQL
Good understanding of Core Java
Hands-on knowledge of handling industry-standard financial-transaction-related file formats
Hands-on knowledge of various compression and encryption techniques (such as SSL) and Secure Shell (SSH) authentication
Excellent communication and influencing skills
Education/Qualifications
Degree from an accredited college or university with a concentration in Engineering or Computer Science
How We’ll Support You
Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs
About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
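The responsibilities above mention monitoring server health with Python and shell scripts. The sketch below is a minimal, hypothetical Python example (standard library only, Unix hosts) that checks disk usage and load average against thresholds; the thresholds and output format are assumptions, not values from the posting, and a real check would feed a tool such as Geneos or Prometheus rather than print to stdout.

```python
import os
import shutil
import socket

# Hypothetical thresholds; real SRE checks would come from an agreed runbook or SLO.
DISK_USAGE_LIMIT_PCT = 85.0
LOAD_PER_CPU_LIMIT = 1.5

def check_health() -> list[str]:
    """Return a list of alerts for this host; an empty list means healthy."""
    alerts = []
    usage = shutil.disk_usage("/")
    used_pct = usage.used / usage.total * 100
    if used_pct > DISK_USAGE_LIMIT_PCT:
        alerts.append(f"disk usage {used_pct:.1f}% exceeds {DISK_USAGE_LIMIT_PCT}%")
    load1, _, _ = os.getloadavg()  # 1-minute load average (Unix only)
    cpus = os.cpu_count() or 1
    if load1 / cpus > LOAD_PER_CPU_LIMIT:
        alerts.append(f"load {load1:.2f} is high for {cpus} CPUs")
    return alerts

if __name__ == "__main__":
    host = socket.gethostname()
    alerts = check_health()
    if not alerts:
        print(f"{host}: OK")
    for alert in alerts:
        print(f"{host}: ALERT {alert}")
```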
Posted 2 weeks ago
7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.
Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems - the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.
Endpoint Management Specialist
The Endpoint Management Specialist is a senior technical expert responsible for resolving complex issues related to Windows operating systems, endpoint management, and network connectivity. This role involves advanced diagnostics, proactive optimization, and automation to ensure high performance, security, and reliability across enterprise endpoints.
What you'll do:
Advanced Diagnostics and Troubleshooting: Perform deep analysis and troubleshooting of Windows operating systems and applications using advanced diagnostic tools and utilities. Investigate and resolve escalated incidents related to system performance, stability, and security. Correlate system performance metrics with network telemetry to identify and resolve root causes of complex hybrid issues. Design, implement, and maintain automated device provisioning and policy management solutions. Manage application deployment, patch management, and configuration settings across all endpoints. Enforce security policies, compliance standards, and endpoint protection measures.
Endpoint Network Troubleshooting: Diagnose and resolve connectivity, latency, and performance issues on endpoints using standard network diagnostic tools. Validate network configurations and collaborate with other technical teams to resolve network-related issues. Analyze network interface metrics and monitor for anomalies, bandwidth congestion, or security threats.
Performance Monitoring and Optimization: Monitor and optimize the performance of Windows environments and network endpoints. Implement best practices for system tuning, resource utilization, and proactive maintenance.
Documentation and Knowledge Sharing: Create and maintain detailed technical documentation and runbooks. Mentor junior team members and provide training on advanced diagnostics and management practices.
Collaboration and Leadership: Work closely with cross-functional teams to drive continuous improvement and innovation. Lead complex projects and ensure adherence to established processes and standards.
What you'll bring:
Expertise in Windows operating system diagnostics, performance monitoring, and troubleshooting.
Strong experience with endpoint management solutions, automated device provisioning, and policy management.
Proficiency in network protocol analysis and troubleshooting methodologies.
Experience with network monitoring platforms and automated diagnostic scripts.
Strong scripting skills for automation and customization.
Proven ability to resolve high-level technical issues and perform root cause analysis.
Excellent communication, documentation, and mentoring skills.
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).
7+ years of experience in advanced Windows support, endpoint management, or related engineering roles.
Relevant industry certifications are highly desirable.
Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.
Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.
Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.
To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.
Find Out More At: www.zs.com
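As a small illustration of the endpoint network troubleshooting described above (checking connectivity and latency with standard tooling), here is a hedged Python sketch that measures TCP connect latency to a few targets using only the standard library. The target list, sample count and warning threshold are hypothetical, not values from the posting.

```python
import socket
import statistics
import time

# Hypothetical targets; a real diagnostic script would probe the services an endpoint depends on.
TARGETS = [("intranet.example.com", 443), ("8.8.8.8", 53)]
SAMPLES = 5
LATENCY_WARN_MS = 150.0

def tcp_connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> float | None:
    """Return the TCP handshake latency in milliseconds, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None

if __name__ == "__main__":
    for host, port in TARGETS:
        samples = [tcp_connect_latency_ms(host, port) for _ in range(SAMPLES)]
        good = [s for s in samples if s is not None]
        if not good:
            print(f"{host}:{port} unreachable")
            continue
        median = statistics.median(good)
        flag = "WARN" if median > LATENCY_WARN_MS else "ok"
        print(f"{host}:{port} median {median:.1f} ms over {len(good)}/{SAMPLES} attempts [{flag}]")
```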
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview: We are seeking an Embedded AI Software Engineer with deep expertise in writing software for resource-constrained edge hardware. This role is critical to building optimized pipelines that leverage media encoders/decoders, hardware accelerators, and AI inference runtimes on platforms like NVIDIA Jetson, Hailo, and other edge AI SoCs. You will be responsible for developing highly efficient, low-latency modules that run on embedded devices, involving deep integration with NVIDIA SDKs (Jetson Multimedia, DeepStream, TensorRT) and broader GStreamer pipelines.
Key Responsibilities:
Media Pipeline & AI Model Integration: Implement hardware-accelerated video processing pipelines using GStreamer, V4L2, and custom media backends. Integrate AI inference engines using NVIDIA TensorRT, DeepStream SDK, or similar frameworks (ONNX Runtime, OpenVINO, etc.). Profile and optimize model loading, preprocessing, postprocessing, and buffer management for the edge runtime.
System-Level Optimization: Design software within strict memory, compute, and power budgets specific to edge hardware. Utilize multimedia capabilities (ISP, NVENC/NVDEC) and leverage DMA and zero-copy mechanisms where applicable. Implement fallback logic and error handling for edge cases in live deployment conditions.
Platform & Driver-Level Work: Work closely with kernel modules, device drivers, and board support packages to tune performance. Collaborate with hardware and firmware teams to validate system integration. Contribute to device provisioning, model updates, and boot-up behavior for AI edge endpoints.
Required Skills & Qualifications:
Educational Background: Bachelor’s or Master’s degree in Computer Engineering, Electronics, Embedded Systems, or related fields.
Professional Experience: 2–4 years of hands-on development for edge/embedded systems using C++ (mandatory). Demonstrated experience with NVIDIA Jetson or equivalent edge AI hardware platforms.
Technical Proficiency: Proficient in C++11/14/17 and multi-threaded programming. Strong understanding of video codecs, media IO pipelines, and encoder/decoder frameworks. Experience with GStreamer, V4L2, and multimedia buffer handling. Familiarity with TensorRT, DeepStream, CUDA, and NVIDIA’s multimedia APIs. Exposure to other runtimes like HailoRT, OpenVINO, or the Coral Edge TPU SDK is a plus.
Bonus Points: Familiarity with build systems (CMake, Bazel), cross-compilation, and Yocto. Understanding of AI model quantization, batching, and layer fusion for performance. Prior experience with camera bring-up, video streaming, and inference on live feeds.
Contact Information: To apply, please send your resume and portfolio details to hire@condor-ai.com with “Application: Embedded AI Software Engineer” in the subject line.
About Condor AI: Condor is an AI engineering company where we use artificial intelligence models to deploy solutions in the real world. Our core strength lies in Edge AI, combining custom hardware with optimized software for fast, reliable, on-device intelligence. We work across smart cities, industrial automation, logistics, and security, with a team that brings over a decade of experience in AI, embedded systems, and enterprise-grade solutions. We operate lean, think globally, and build for production from system design to scaled deployment.
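As a hedged illustration of the GStreamer pipeline work mentioned above, here is a minimal sketch using GStreamer's Python bindings (PyGObject) for brevity; the role itself calls for C++, and a real Jetson deployment would use hardware-accelerated elements (e.g. nvv4l2decoder, nvinfer, NVMM buffers) rather than the generic test source assumed here.

```python
# Minimal GStreamer pipeline sketch (Python bindings used for brevity; role uses C++).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Hypothetical pipeline: synthetic video source -> caps -> conversion -> on-screen sink.
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=300 ! video/x-raw,width=640,height=480,framerate=30/1 "
    "! videoconvert ! autovideosink"
)

pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or an error is posted on the pipeline bus.
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                             Gst.MessageType.ERROR | Gst.MessageType.EOS)
if msg and msg.type == Gst.MessageType.ERROR:
    err, debug = msg.parse_error()
    print("Pipeline error:", err, debug)

pipeline.set_state(Gst.State.NULL)
```

In production pipelines the same structure holds, but the interesting work is in the element choice and zero-copy buffer handling that the responsibilities above describe.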
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values everyday results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves. The Team The Voice Client team delivers crucial voice call capabilities for our customer call/contact centre solutions. Our focus is on smooth and high audio quality to enhance the customer experience. This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States. Responsibilities: Interface directly with customer’s telecom engineers and IT teams to deploy customized solutions and trouble shoot issues Assist in building customer and carrier SIP Trunk connectivity, including interop sessions and activations Assist in the day-to-day operational support of the Telecommunications network, analyzing problems affecting network availability and customer quality reports Escalation point of contact for the Network Operations Center to resolve critical alerts generated by the SBC’s Analyze history of telecommunication related incidents and perform preventive measures Assist in the day-to-day operation of the telecommunications network, where necessary analyzing problems affecting network availability and customer/vendor service quality Provide root-cause analyses on service outages Manage Telecommunications Service Provider & Vendors Implement hardware and software deployments on the telephony network as required; deploy new services including interop testing with telecom carriers and customers Manage telecommunications Service Providers & Vendors Implement hardware and software deployments on the telephony network as required Deploy new services including interop testing with Telecom carriers and customers Develop and implement testing plans Create technical documentation and Standard Operating Procedures (SOPs) for daily/weekly recurring tasks or change requests Providing root-cause of issues and suggesting & deploying resolutions for serious problems Providing direction to junior team members in troubleshooting and managing complex service issues Understand product capabilities and limitations Qualifications: 5+ years of telecom engineer experience with VoIP/SIP voice applications Must have a detailed, working and theoretical knowledge of voice and data communications, including traditional switching, signaling and routing systems to include SIP, TCP/IP, MPLS etc. High level knowledge of VoIP principles, protocols and CODECs such as H.248, SIP, G.711, G.729, WebRTC, MPLS, VPN, UDP, RTP, MTP etc. 
High-level knowledge and experience with SIP call routing, Least Cost Routing, Security Controls (TLS, IPSEC, ACLs) with Session Border Controllers Ability to analyze and design voice systems to achieve stable, efficient and secure operation Experience with International Routing including ITFS, iDIDs and local termination policies Ability to build, interop and maintain direct-connect SIP Trunks to customers and carriers Review, assess, critique and implement Telecom Architecture design changes in Lab and Production environments Excellent organizational and follow-through skill sets are essential Document, troubleshoot and resolve a multifaceted and complex global voice network Maintain the telephony environment to assure delivery of Cloud applications, voice and customer connectivity and availability to target 99.99% SLA Experience with Softswitch, Session Border Controllers (preferably Sonus/Ribbon, AudioCodes), SIP proxies, Media Servers (preferably AudioCodes IPM-6310 and FreeSWITCH) In-depth knowledge of Wireshark, Empirix or other network protocol analyzers Ability to capture and analyze RTP streams as related to traditional voice-quality KPIs such as MOS, Jitter, Latency, and Packet Loss Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents here: https://www.five9.com/pt-pt/legal. Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
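For the voice-quality KPIs mentioned above, interarrival jitter is normally computed per RFC 3550. Below is a small sketch that assumes packets have been captured as (arrival_time_seconds, rtp_timestamp) pairs and an 8 kHz codec clock such as G.711; both the clock rate and the capture format are assumptions for illustration.

```python
# RFC 3550 interarrival jitter estimate, reported in milliseconds.
# Assumes an 8000 Hz RTP clock (e.g. G.711) and packets as (arrival_secs, rtp_timestamp).
def rtp_jitter_ms(packets, clock_rate=8000):
    jitter = 0.0  # running estimate, kept in RTP timestamp units
    prev = None
    for arrival, rtp_ts in packets:
        if prev is not None:
            prev_arrival, prev_ts = prev
            # D = (Rj - Ri) - (Sj - Si), expressed in timestamp units
            transit_delta = (arrival - prev_arrival) * clock_rate - (rtp_ts - prev_ts)
            jitter += (abs(transit_delta) - jitter) / 16.0
        prev = (arrival, rtp_ts)
    return jitter / clock_rate * 1000.0

# Example: three packets nominally 20 ms apart, with the third arriving 5 ms late.
print(rtp_jitter_ms([(0.000, 0), (0.020, 160), (0.045, 320)]))
```

In practice the timestamps would come from a Wireshark/tshark export or a pcap parser rather than a hard-coded list.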
Posted 2 weeks ago
7.0 years
0 Lacs
Greater Kolkata Area
Remote
About Workato Workato transforms technology complexity into business opportunity. As the leader in enterprise orchestration, Workato helps businesses globally streamline operations by connecting data, processes, applications, and experiences. Its AI-powered platform enables teams to navigate complex workflows in real-time, driving efficiency and agility. Trusted by a community of 400,000 global customers, Workato empowers organizations of every size to unlock new value and lead in today’s fast-changing world. Learn how Workato helps businesses of all sizes achieve more at workato.com. Why join us? Ultimately, Workato believes in fostering a flexible, trust-oriented culture that empowers everyone to take full ownership of their roles. We are driven by innovation and looking for team players who want to actively build our company. But, we also believe in balancing productivity with self-care. That’s why we offer all of our employees a vibrant and dynamic work environment along with a multitude of benefits they can enjoy inside and outside of their work lives. If this sounds right up your alley, please submit an application. We look forward to getting to know you! Also, Feel Free To Check Out Why Business Insider named us an “enterprise startup to bet your career on” Forbes’ Cloud 100 recognized us as one of the top 100 private cloud companies in the world Deloitte Tech Fast 500 ranked us as the 17th fastest growing tech company in the Bay Area, and 96th in North America Quartz ranked us the #1 best company for remote workers Responsibilities We are looking for an exceptional Senior Infrastructure Engineer with experience in building high-performing, scalable, enterprise-grade applications to join our growing team. In this role, you will be responsible for building a high-performance queuing/storage engine. You will work in a polyglot environment where you can learn new languages and technologies whilst working with an enthusiastic team.
You will also be responsible for: Software Engineering Design and develop high-volume, low-latency applications for mission-critical systems and deliver high availability and performance Contribute to all phases of the development life cycle Write well-designed, testable, efficient code Evaluate and propose improvements to existing systems Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review Infrastructure Engineering Maintain and evolve application cloud infrastructure (AWS) Maintain and evolve Kubernetes clusters Infrastructure hardening according to compliance and security requirements Maintenance and development of monitoring, logging, tracing, and alerting solutions OpenSearch Expertise Experience scaling OpenSearch clusters to handle heavy query and indexing workloads, including optimizing bulk indexing operations and query throughput Proficiency in implementing and managing effective sharding strategies to balance performance, storage, and recovery needs Advanced knowledge of OpenSearch performance tuning, including JVM settings, field mappings, and cache optimization Expertise in designing robust disaster recovery solutions with cross-cluster replication, snapshots, and restoration procedures Experience implementing and optimizing vector search capabilities for ML applications, including k-NN algorithms and approximate nearest neighbor (ANN) search Knowledge of custom OpenSearch plugin development for specialized indexing or query requirements Hands-on experience deploying and managing self-hosted OpenSearch clusters in Kubernetes environments Familiarity with monitoring OpenSearch performance metrics and implementing automated scaling solutions Requirements Qualifications / Experience / Technical Skills BS/MS degree in Computer Science, Engineering or a related subject 7+ years of industry experience Experience working with public cloud infrastructure providers (AWS/Azure/Google Cloud) Experience with Terraform, Docker A hands-on approach to implementing solutions Good understanding of Linux networking and security Exceptional understanding of Kubernetes concepts Experience with Golang/Python/Java/Ruby (any) and databases such as PostgreSQL Contributions to open-source projects are a plus Soft Skills / Personal Characteristics Communicate in English with colleagues and customers
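To make the OpenSearch expectations above concrete, here is a minimal opensearch-py sketch that creates a sharded index with a k-NN vector field and bulk-loads documents. The host, credentials, shard/replica counts, and the 384-dimension embeddings are illustrative assumptions, not any particular production configuration.

```python
# Minimal sketch: sharded k-NN index plus bulk indexing with opensearch-py.
# Host, auth, shard/replica counts and the 384-dim embeddings are placeholder assumptions.
from opensearchpy import OpenSearch, helpers

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=False,
)

index_body = {
    "settings": {
        "index": {"number_of_shards": 3, "number_of_replicas": 1, "knn": True}
    },
    "mappings": {
        "properties": {
            "title": {"type": "text"},
            "embedding": {"type": "knn_vector", "dimension": 384},
        }
    },
}
client.indices.create(index="docs", body=index_body)

# Bulk indexing keeps per-request overhead low compared to indexing one document at a time.
actions = (
    {"_index": "docs", "_source": {"title": f"doc {i}", "embedding": [0.0] * 384}}
    for i in range(10_000)
)
helpers.bulk(client, actions, chunk_size=1000)
```

The same pattern scales up by tuning chunk size, refresh interval, and replica counts during heavy ingest, which is the kind of trade-off the role description refers to.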
Posted 2 weeks ago
7.0 years
0 Lacs
Greater Kolkata Area
Remote
About Workato Workato transforms technology complexity into business opportunity. As the leader in enterprise orchestration, Workato helps businesses globally streamline operations by connecting data, processes, applications, and experiences. Its AI-powered platform enables teams to navigate complex workflows in real-time, driving efficiency and agility. Trusted by a community of 400,000 global customers, Workato empowers organizations of every size to unlock new value and lead in today’s fast-changing world. Learn how Workato helps businesses of all sizes achieve more at workato.com. Why join us? Ultimately, Workato believes in fostering a flexible, trust-oriented culture that empowers everyone to take full ownership of their roles. We are driven by innovation and looking for team players who want to actively build our company. But, we also believe in balancing productivity with self-care. That’s why we offer all of our employees a vibrant and dynamic work environment along with a multitude of benefits they can enjoy inside and outside of their work lives. If this sounds right up your alley, please submit an application. We look forward to getting to know you! Also, Feel Free To Check Out Why Business Insider named us an “enterprise startup to bet your career on” Forbes’ Cloud 100 recognized us as one of the top 100 private cloud companies in the world Deloitte Tech Fast 500 ranked us as the 17th fastest growing tech company in the Bay Area, and 96th in North America Quartz ranked us the #1 best company for remote workers Responsibilities We are looking for an exceptional Senior Infrastructure Engineer with experience in building high-performing, scalable, enterprise-grade applications to join our growing team. In this role, you will be responsible for building a high-performance queuing/storage engine. You will work in a polyglot environment where you can learn new languages and technologies whilst working with an enthusiastic team. You will also be responsible for: Software Engineering Design and develop high-volume, low-latency applications for mission-critical systems and deliver high availability and performance Contribute to all phases of the development life cycle Write well-designed, testable, efficient code Evaluate and propose improvements to existing systems Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review Infrastructure Engineering Maintain and evolve application cloud infrastructure (AWS) Maintain and evolve Kubernetes clusters Infrastructure hardening according to compliance and security requirements Maintenance and development of monitoring, logging, tracing, and alerting solutions Requirements Qualifications / Experience / Technical Skills BS/MS degree in Computer Science, Engineering or a related subject 7+ years of industry experience Experience working with public cloud infrastructure providers (AWS/Azure/Google Cloud) Experience with Terraform, Docker A hands-on approach to implementing solutions Good understanding of Linux networking and security Exceptional understanding of Kubernetes concepts Experience with Golang/Python/Java/Ruby (any) and databases such as PostgreSQL Contributions to open-source projects are a plus Soft Skills / Personal Characteristics Communicate in English with colleagues and customers
Posted 2 weeks ago
3.0 years
0 Lacs
Greater Chennai Area
On-site
Description This role is for the AFT Inbound Foundations and Routing (IBFR) team, which is responsible for building core software components/services that orchestrate the movement of inventory within a warehouse and for interfacing with sortation and SCOT systems for high-fidelity promise and planning decisions. Worldwide, the IBFR team supports 570+ FCs across NA, EU and JP regions. With rapid expansion into new geographies, innovations in supply chain, delivery models and customer experience, an increasingly complex transportation network, an ever-expanding selection of products and a growing number of shipments worldwide, we have an opportunity to build software that scales the business, leads the industry through innovation and delights millions of customers worldwide. We have challenging problems (both business and technical) that leverage new technologies to support our high-volume, low-latency and high-availability services. If you are looking for an opportunity to solve deep technical problems and build innovative solutions in a fast-paced environment working with smart, passionate software developers, this might be the role for you. A successful candidate for this position will be able to build new software from the ground up, create pragmatic solutions for complex business problems, and enjoy working closely with operations staff in Amazon fulfillment centers around the world. Key job responsibilities 3+ years of non-internship professional software development experience 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience Experience programming with at least one software programming language Basic Qualifications 3+ years of non-internship professional software development experience 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience 3+ years of Video Games Industry (supporting title Development, Release, or Live Ops) experience Experience programming with at least one software programming language Preferred Qualifications 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience Bachelor's degree in computer science or equivalent Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - Amazon Dev Center India - Hyderabad Job ID: A2879002
Posted 2 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Engineering at Innovaccer With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point into valuable insights. Join us and be part of a team that's turning the vision of better healthcare into reality—one line of code at a time. Together, we're shaping the future and making a meaningful impact on the world. About The Role Technology that once promised to simplify patient care has, in many cases, created more complexity. At Innovaccer, we tackle this challenge by leveraging the vast amount of healthcare data available and replacing long-standing issues with intelligent, data-driven solutions. Data is the foundation of our innovation. We are looking for a Senior AI Engineer who understands healthcare data and can build algorithms that personalize treatments based on a patient's clinical and behavioral history. This role will help define and build the next generation of predictive analytics tools in healthcare. A Day in the Life Design and build scalable AI platform architecture to support ML development, agentic frameworks, and robust self-serve AI pipelines Develop agentic frameworks and a catalog of AI agents tailored for healthcare use cases Design and deploy high-performance, low-latency AI applications Build and optimize ML/DL models, including generative models like Transformers and GANs Construct and manage data ingestion and transformation pipelines for scalable AI solutions Conduct experiments, statistical analysis, and derive insights to guide development Collaborate with data scientists, engineers, product managers, and business stakeholders to translate AI innovations into real-world applications Partner with business leaders and clients to understand pain points and co-create scalable AI-driven solutions Experience with Docker, Kubernetes, AWS/Azure Preferred Skills Proficient in Python for building scalable, high-performance AI applications LLM optimization and deployment at scale Requirements What You Need 3+ years of software engineering experience with strong API development skills 3+ years of experience in data science, including at least 1+ year building generative AI pipelines, agents, and RAG systems Strong Python programming skills, particularly in enterprise application development and optimization Frameworks like LangChain Vector databases Embedding models and Retrieval-Augmented Generation (RAG) design Familiarity with at least one ML platform Benefits Here's What We Offer Generous Leave Policy: Up to 40 days of leave annually Parental Leave: One of the industry's best parental leave policies Sabbatical Leave: Take time off for upskilling, research, or personal pursuits Health Insurance: Comprehensive coverage for you and your family Pet-Friendly Office*: Bring your furry friends to our Noida office Creche Facility for Children*: On-site care at our India offices Pet-friendly and creche facilities are available at select locations only (e.g., Noida for pets)
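As a rough sketch of the retrieval half of a RAG pipeline like the one described above, the snippet below embeds a few documents and runs a nearest-neighbour query. sentence-transformers and FAISS stand in for whichever embedding model and vector database the platform actually uses, and the model name, documents, and query are assumptions made purely for illustration.

```python
# Minimal RAG retrieval sketch: embed documents, index them, retrieve top-k for a query.
# sentence-transformers + FAISS are stand-ins for the production embedding model / vector DB.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Patient has a history of type 2 diabetes managed with metformin.",
    "Annual wellness visit completed; no acute complaints.",
    "Recent A1c of 8.2 suggests suboptimal glycemic control.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")            # assumed model; 384-dim output
doc_vecs = model.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(doc_vecs.shape[1])               # inner product == cosine after normalization
index.add(doc_vecs)

query = "Which patients may need diabetes medication adjustment?"
q_vec = model.encode([query], normalize_embeddings=True).astype("float32")
scores, ids = index.search(q_vec, 2)

# In a full RAG loop the retrieved passages would be injected into the LLM prompt.
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```

A production system would swap the flat index for an ANN index or a managed vector database and add chunking, metadata filters, and evaluation of retrieval quality.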
Posted 2 weeks ago
0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Apptunix is a leading Mobile App & Web Solutions development agency, based out of Texas, US. The agency empowers cutting-edge startups & enterprise businesses, paving the path for their incremental growth via technology solutions. Established in mid-2013, Apptunix has since then engaged in elevating the client’s interests & satisfaction through rendering improved and innovative Software and Mobile development solutions. The company strongly comprehends business needs and implements them by merging advanced technologies with its seamless creativity. Apptunix currently employs 250+ in-house experts who work closely & dedicatedly with clients to build solutions as per their customers' needs. Come, transform with us! Roles and Responsibilities: The candidate should have the following skill sets: Deep experience working on Node.js Understanding of SQL and NoSQL database systems with their pros and cons Experience working with databases like MongoDB Solid understanding of MVC and stateless APIs & building RESTful APIs Should have experience and knowledge of scaling and security considerations Integration of user-facing elements developed by front-end developers with server-side logic Good experience with Express.js, MongoDB, AWS S3 and ES6 Writing reusable, testable, and efficient code Design and implementation of low-latency, high-availability, and performance applications Implementation of security and data protection Integration of data storage solutions and Database structure Experience Required - Five or more years Must-Haves: Ability to work within a team and independently. Work from office
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Seeking an experienced Embedded Linux Test Engineer to validate and quality-assure Yocto‑based Linux BSP across diverse SoCs (e.g., QCS6490, QRB5165, QCS8550). The ideal candidate will design and execute comprehensive test plans, drive development of test infrastructure, and collaborate with firmware/kernel teams to ensure robust, reliable SoC platform support. Key Responsibilities Develop test plans and test cases for system, integration, and regression testing on mobile and IoT-class SoCs (e.g., camera, multimedia, networking, connectivity). Flash and boot Yocto-generated images (e.g., qcom-multimedia-image, real-time variants) on hardware evaluation kits. Validate key subsystems: bootloader, kernel, drivers (Wi‑Fi, Bluetooth, camera, display), power management, real-time functionality. Build and maintain automation frameworks: kernel image deployment, logging, instrumentation, hardware reset, network interfaces. Track and report software/hardware defects; work with cross-functional engineering teams to triage and resolve issues. Analyze system logs, trace output, measure boot/latency, resource utilization and performance metrics. Maintain test infrastructure and CI pipelines, ensuring reproducibility and efficiency. Contribute to documentation: test reports, acceptance criteria, qualification artifacts, and release summaries Mandatory Skills Strong C/C++ & scripting (Python, Bash) Yocto & BitBake workflows, experience building BSPs and flashing images on development boards Linux kernel internals, drivers, real-time patches Experience with Qualcomm SoCs or similar ARM platforms; Hands-on knowledge of QCS/QRB platforms and multimedia pipelines Hardware bring-up, serial consoles, bootloader debugging (U-Boot) GitLab/ Jenkins / Buildbot, hardware-triggered automation Performance analysis and profiling tools Ability to measure boot time, trace latency, optimize kernel subsystems Nice-to-Have Skills Experience debugging multimedia subsystems (camera, display, audio, video pipelines). Familiarity with Debian/Ubuntu-based host build environments. Knowledge of Qualcomm-specific test tools and manifest workflows (e.g., meta-qcom-realtime, qcom-manifest) Prior work in IoT/robotics, real-time or safety-critical embedded platforms. Exposure to certification/regulatory testing (e.g., FCC, Bluetooth SIG, Wi‑Fi Alliance).
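For the boot-time measurement called out above, a common approach is to timestamp console output over the serial port from reset to a known marker such as the login prompt. Below is a minimal sketch using pyserial; the device path, baud rate, and prompt string are assumptions, and triggering the board reset (relay, PDU, or debug header) is left to the surrounding automation.

```python
# Minimal boot-time sketch: time from an externally triggered reset to the login prompt
# appearing on the serial console. Device path, baud rate and prompt are assumptions.
import time
import serial

PORT = "/dev/ttyUSB0"
BAUD = 115200
PROMPT = b"login:"          # marker that userspace is up; adjust per image

with serial.Serial(PORT, BAUD, timeout=1) as console:
    console.reset_input_buffer()
    # ...power-cycle or reset the board here via the test rig's relay/PDU hook...
    start = time.monotonic()
    deadline = start + 120   # give up after two minutes
    boot_log = []
    while time.monotonic() < deadline:
        line = console.readline()
        if line:
            boot_log.append(line)
        if PROMPT in line:
            print(f"Boot to login prompt: {time.monotonic() - start:.2f} s")
            break
    else:
        print("Timed out waiting for login prompt")

# boot_log can be archived alongside the measurement for regression tracking in CI.
```

Hooked into a Jenkins or GitLab pipeline, the same loop gives a per-build boot-time trend that makes kernel or BSP regressions easy to spot.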
Posted 2 weeks ago
0 years
7 - 8 Lacs
Hyderābād
Remote
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Lead Consultant - ML/CV Ops Engineer! We are seeking a highly skilled ML/CV Ops Engineer to join our AI Engineering team. This role is focused on operationalizing Computer Vision models—ensuring they are efficiently trained, deployed, monitored, and retrained across scalable infrastructure or edge environments. The ideal candidate has deep technical knowledge of ML infrastructure, DevOps practices, and hands-on experience with CV pipelines in production. You’ll work closely with data scientists, DevOps, and software engineers to ensure computer vision models are robust, secure, and production-ready at all times. Key Responsibilities: End-to-End Pipeline Automation: Build and maintain ML pipelines for computer vision tasks (data ingestion, preprocessing, model training, evaluation, inference). Use tools like MLflow, Kubeflow, DVC, and Airflow to automate workflows. Model Deployment & Serving: Package and deploy CV models using Docker and orchestration platforms like Kubernetes. Use model-serving frameworks (TensorFlow Serving, TorchServe, Triton Inference Server) to enable real-time and batch inference. Monitoring & Observability: Set up model monitoring to detect drift, latency spikes, and performance degradation. Integrate custom metrics and dashboards using Prometheus, Grafana, and similar tools. Model Optimization: Convert and optimize models using ONNX, TensorRT, or OpenVINO for performance and edge deployment. Implement quantization, pruning, and benchmarking pipelines. Edge AI Enablement (Optional but Valuable): Deploy models on edge devices (e.g., NVIDIA Jetson, Coral, Raspberry Pi) and manage updates and logs remotely. Collaboration & Support: Partner with Data Scientists to productionize experiments and guide model selection based on deployment constraints. Work with DevOps to integrate ML models into CI/CD pipelines and cloud-native architecture. Qualifications we seek in you! Minimum Qualifications Bachelor’s or Master’s in Computer Science, Engineering, or a related field. Sound experience in ML engineering, with significant work in computer vision and model operations. Strong coding skills in Python and familiarity with scripting for automation. Hands-on experience with PyTorch, TensorFlow, OpenCV, and model lifecycle tools like MLflow, DVC, or SageMaker.
Solid understanding of containerization and orchestration (Docker, Kubernetes). Experience with cloud services (AWS/GCP/Azure) for model deployment and storage. Preferred Qualifications: Experience with real-time video analytics or image-based inference systems. Knowledge of MLOps best practices (model registries, lineage, versioning). Familiarity with edge AI deployment and acceleration toolkits (e.g., TensorRT, DeepStream). Exposure to CI/CD pipelines and modern DevOps tooling (Jenkins, GitLab CI, ArgoCD). Contributions to open-source ML/CV tooling or experience with labeling workflows (CVAT, Label Studio). Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Lead Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 16, 2025, 3:14:00 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
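To illustrate the model-optimization step mentioned in the responsibilities above (exporting to ONNX and benchmarking inference), here is a small hedged sketch. ResNet-18, opset 17, and the 224x224 input are placeholder assumptions for whichever CV model the pipeline actually serves.

```python
# Sketch: export a PyTorch CV model to ONNX, then benchmark latency with ONNX Runtime.
# ResNet-18, opset 17 and the 224x224 input are placeholder assumptions.
import time
import numpy as np
import torch
import torchvision.models as models
import onnxruntime as ort

model = models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model, dummy, "resnet18.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},
    opset_version=17,
)

sess = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Warm up, then time a batch of runs to get a rough mean latency figure.
for _ in range(5):
    sess.run(None, {"input": x})
start = time.perf_counter()
runs = 50
for _ in range(runs):
    sess.run(None, {"input": x})
print(f"mean latency: {(time.perf_counter() - start) / runs * 1000:.2f} ms")
```

On edge hardware the same exported model would typically be further converted (e.g., to a TensorRT engine) and benchmarked with the GPU execution provider or the vendor's runtime instead of the CPU provider used here.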
Posted 2 weeks ago
0 years
0 Lacs
Hyderābād
On-site
Job Requirements Architect & Lead Storage Subsystem Development: Design and lead implementation of Linux-based storage stack for embedded or server platforms. Define architecture for storage interfaces (eMMC, UFS, NVMe, SATA, SD, USB mass storage, etc.). Optimize for performance, power, and reliability on target SoC or platform. Driver Development & Integration: Develop and maintain Linux kernel drivers for storage devices and controllers. Ensure upstream alignment with mainline Linux or maintain vendor-specific forks as needed. Integrate vendor storage controller IPs and firmware. File System & Block Layer Expertise: Work with Linux file systems (ext4, f2fs, xfs, btrfs). Optimize storage stack performance using IO schedulers, caching strategies, and tuning. Reliability, Data Integrity & Power Resilience: Implement support for journaling, wear leveling (especially for flash), secure erase, and TRIM. Ensure data integrity during power loss (power-fail robustness). Work with hardware teams on power rail sequencing and power management integration. Cross-Functional Collaboration: Coordinate with SoC vendors, QA, product management, and firmware/hardware teams. Collaborate with bootloader, security, and OTA (Over-The-Air) update teams for seamless storage handling. Debugging & Performance Analysis: Use tools like blktrace, iostat, fio, perf, strace, and kernel logs for performance and issue analysis. Root cause field issues (e.g., storage corruption, I/O latency) across layers. Compliance & Validation: Validate storage against JEDEC/UFS/SD/USB/NVMe standards. Ensure support for secure boot, encrypted storage (dm-crypt, LUKS), and SELinux/AppArmor policies where needed. Mentorship & Leadership: Lead and mentor a team of kernel and platform developers. Conduct code reviews and establish best practices for Linux storage development. Work Experience Kernel Programming: Strong knowledge of Linux storage subsystems (block layer, VFS, I/O stack). Proficiency in C and kernel debugging techniques. Storage Protocols & Interfaces: Hands-on with eMMC, UFS, NVMe, USB mass storage, SATA, SPI-NAND/NOR, SDIO, etc. Understanding of storage standards (SCSI, AHCI, NVMe spec, JEDEC). Filesystems: Deep knowledge of ext4, f2fs, and familiarity with log-structured or flash-optimized file systems. Performance & Tuning: Expertise in tuning I/O performance and handling flash-specific issues (latency, endurance, etc.). Tools: blktrace, iostat, fio, perf, gdb, crash, etc. Security: Secure storage handling, key management, dm-verity/dm-crypt, rollback protection. Yocto/Build Systems (optional but useful): Understanding of build flows for embedded Linux using Yocto or Buildroot.
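For the performance-analysis work described above, fio's JSON output makes it straightforward to script storage benchmarks. Below is a minimal sketch that runs a 4 KiB random-read job and pulls out IOPS and mean completion latency; the target file, job sizing, and the exact JSON field names (which vary somewhat between fio versions) are assumptions, hence the defensive lookups.

```python
# Sketch: run a 4 KiB random-read fio job and extract IOPS / mean completion latency.
# Target path, job sizing and JSON field names are assumptions; fio's JSON layout
# differs slightly between versions, hence the defensive .get() lookups.
import json
import subprocess

cmd = [
    "fio", "--name=randread", "--filename=/tmp/fio.testfile",
    "--rw=randread", "--bs=4k", "--size=256M",
    "--ioengine=libaio", "--direct=1",
    "--runtime=15", "--time_based",
    "--output-format=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
report = json.loads(result.stdout)

read = report["jobs"][0]["read"]
iops = read.get("iops")
clat_mean_ns = read.get("clat_ns", {}).get("mean")

print(f"IOPS: {iops}")
if clat_mean_ns is not None:
    print(f"mean completion latency: {clat_mean_ns / 1e6:.3f} ms")
```

Runs like this, repeated across eMMC/UFS/NVMe targets and kernel revisions, give the regression data that blktrace and perf traces can then explain in detail.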
Posted 2 weeks ago
4.0 years
25 - 50 Lacs
Hyderābād
Remote
About Rhythm Rhythm is redefining the future of remote cardiac monitoring. Our all-in-one platform combines advanced technology with a dedicated clinical support team to help practices streamline workflow, improve patient outcomes, and drive revenue — without adding administrative burden. We serve cardiology clinics, hospitals, and health systems across the U.S., providing unmatched reliability, service, and integration. Role Overview We're looking for a highly technical, hands-on Salesforce Platform Manager to own and operate our complex Salesforce environment. This role is responsible for building, maintaining, and evolving the platform's architecture, automations, integrations, and performance. You'll be the go-to technical expert across our Salesforce ecosystem—including Flow, Screen Flows, API integrations, Omni-Channel, permission sets, data models, and page layouts. You'll also play a critical role in managing overall platform health and scale: monitoring data caps, enforcing best practices, and ensuring efficient, stable performance. This position works closely with senior leadership in Operations and Engineering to align technical execution with high-level business needs. While the primary focus is building and maintaining, you'll also help shape near-term priorities and the enhancement roadmap. This is a hybrid role with employees required to work in our Hyderabad office every Tuesday and Wednesday. Key Responsibilities: Technical Platform Ownership Build and maintain all declarative automation: Flows, Screen Flows, Process Builder (if applicable), validation rules, etc. Manage Omni-Channel setup, routing rules, permissions, queues, and configuration. Create and update page layouts, Lightning components (configurable), and permission sets. Oversee system health: data model design, field usage, API limits, data storage thresholds, login/security settings. Identify and resolve technical debt and system inefficiencies proactively. Implement and manage sandbox environments and release cycles. Integration & Development Build and maintain integrations with external systems, including API-based interactions (e.g., AWS Lambdas, third-party platforms). Create and monitor outbound callouts and inbound API endpoints as needed. Work alongside Engineering to ensure Salesforce aligns with the broader systems architecture. Architecture & Performance Serve as the technical authority over Rhythm's Salesforce environment. Monitor platform limits (e.g., data storage, governor limits, API usage) and ensure scalability as the business grows. Plan and execute system upgrades, refactors, and architectural changes as needed to maintain platform performance and reliability. Cross-Functional Collaboration Work closely with VP of Operations and VP of Engineering to understand core needs and translate them into executable Salesforce solutions. Contribute to managing a short-term backlog of Salesforce improvements and prioritize technical execution. Support ad-hoc user and team needs through scalable, maintainable configurations. Qualifications: 4+ years of Salesforce experience , including building complex Flows, managing Omni-Channel, and handling large custom instances. Demonstrated ability to handle technical configuration and architecture independently . Strong understanding of Salesforce platform architecture , data modeling, and automation strategy. Hands-on experience with external API integrations , including creating and managing callouts. 
Familiarity with Apex, even if not writing code daily; should be able to read and debug basic Apex triggers and classes. Prior experience managing platform scale, including API usage, data caps, system latency, and user access. Skilled in creating and maintaining system documentation and technical specs. Comfortable working in a fast-paced, cross-functional environment with evolving needs. Certifications (strongly preferred): Salesforce Certified Administrator (required) Salesforce Certified Platform Developer I (preferred) Salesforce Certified Application Architect or System Architect (ideal) Highlights: Compensation: Salary range: ₹2,500,000 – ₹3,500,000 (25-35 Lakhs) per annum Employment Type: Full-time Work Schedule: Standard business hours, Monday–Friday (EST hours preferred)
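As a rough sketch of the kind of API-based integration mentioned above, the snippet below authenticates against Salesforce with the OAuth 2.0 username-password flow and runs a SOQL query over the REST API. The connected-app credentials, API version, and query are placeholders, and a production integration would more likely use the JWT bearer flow, proper secret management, and a client library.

```python
# Sketch: OAuth username-password flow + SOQL query via the Salesforce REST API.
# Credentials, API version and the query are placeholders; production code would
# typically prefer the JWT bearer flow and managed secrets.
import requests

auth = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "password",
        "client_id": "<connected_app_client_id>",
        "client_secret": "<connected_app_client_secret>",
        "username": "integration.user@example.com",
        "password": "<password + security token>",
    },
    timeout=30,
)
auth.raise_for_status()
token = auth.json()["access_token"]
instance_url = auth.json()["instance_url"]

resp = requests.get(
    f"{instance_url}/services/data/v59.0/query",
    headers={"Authorization": f"Bearer {token}"},
    params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Id"], record["Name"])
```

The same pattern in reverse (an external service calling an Apex REST endpoint or Platform Event) covers the inbound half of the integrations this role owns.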
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Chennai (Work from Office). Experience Level: 8–10 years. Tier: T2. We are seeking a highly skilled and experienced Senior Data Engineer to lead the design and development of scalable, secure, and high-performance data pipelines hosted on a cloud platform. The ideal candidate will have deep expertise in Databricks, Data Fabric, MDM (Informatica), and Unity Catalog, and a strong foundation in data modelling, software engineering, and DevOps practices. This role is critical to building a next-generation healthcare data platform that will power advanced analytics, operational efficiency, and business innovation. Key Responsibilities: 1. Data Pipeline Design & Development: Translate business requirements into actionable technical specifications, defining application components, enhancement needs, data models, and integration workflows. Design, develop, and optimize end-to-end data pipelines using Databricks and related cloud-native tools. Create and maintain detailed technical design documentation and provide accurate estimations for storage/compute resources, cost efficiency, and operational readiness. Implement reusable and scalable ingestion, transformation, and orchestration patterns for structured and unstructured data sources. Ensure pipelines meet functional and non-functional requirements such as latency, throughput, fault tolerance, and scalability. 2. Cloud Platform & Architecture: Build and deploy data solutions on Microsoft Azure (Azure Fabric), leveraging Data Lake and Unity Catalog. Integrate pipelines with Data Fabric and Master Data Management (MDM) platforms for consistent and governed data delivery. Follow best practices in cloud security, encryption, access controls, and identity management. 3. Data Modeling & Metadata Management: Design robust and extensible data models supporting analytics, AI/ML, and operational reporting. Ensure metadata is cataloged, documented, and accessible through Unity Catalog and MDM frameworks. Collaborate with data architects and analysts to ensure alignment with business requirements. 4. DevOps & CI/CD Automation: Adopt DevOps best practices for data pipelines, including automated testing, deployment, monitoring, and rollback strategies. Work closely with platform engineers to manage infrastructure as code, containerization, and CI/CD pipelines. Ensure compliance with enterprise SDLC, security, and data governance policies. 5. Collaboration & Continuous Improvement: Partner with data analysts and product teams to understand data needs and translate them into technical solutions. Continuously evaluate and integrate new tools, frameworks, and patterns to improve pipeline performance and maintainability. Key Skills & Technologies: Required: Databricks (Delta Lake, Spark, Unity Catalog); Azure Data Platform (Data Factory, Data Lake, Azure Functions, Azure Fabric); Unity Catalog for metadata and data governance; strong programming skills in Python and SQL; experience with data modeling, data warehousing, and star/snowflake schema design; proficiency in DevOps tools (Git, Azure DevOps, Jenkins, Terraform, Docker). Preferred: Experience with healthcare or regulated-industry data environments; familiarity with data security standards (e.g., HIPAA, GDPR).
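As a minimal illustration of the Databricks pipeline work described above, the sketch below reads raw JSON from a lake path, applies a simple de-duplication transform, and appends to a governed Delta table. The ADLS path, catalog/schema/table names, and the claim_id business key are placeholder assumptions, not details from the posting.

```python
# Sketch of a bronze-to-silver step on Databricks: raw JSON in, de-duplicated Delta out.
# The ADLS path, Unity Catalog table name and the claim_id business key are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_bronze_to_silver").getOrCreate()

raw = spark.read.format("json").load(
    "abfss://raw@examplelake.dfs.core.windows.net/claims/2024/"
)

silver = (
    raw.dropDuplicates(["claim_id"])                    # keep one row per business key
       .withColumn("ingested_at", F.current_timestamp())
       .withColumn("ingest_date", F.to_date("ingested_at"))
)

(
    silver.write.format("delta")
          .mode("append")
          .partitionBy("ingest_date")
          .saveAsTable("healthcare.silver.claims")      # governed via Unity Catalog
)
```

In practice this step would be wrapped in a job or Delta Live Tables pipeline, with expectations/validation rules and CI/CD deployment handled through Azure DevOps as the responsibilities describe.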
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description NIQ is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. We provide consumer packaged goods manufacturers/fast-moving consumer goods and retailers with accurate, actionable information and insights and a complete picture of the complex and changing marketplace that companies need to innovate and grow. Our approach marries proprietary NIQ data with other data sources to help clients around the world understand what’s happening now, what’s happening next, and how to best act on this knowledge. We like to be in the middle of the action. That’s why you can find us at work in over 90 countries, covering more than 90% of the world’s population. Job Description Our NIQ Technology teams are working on our new “Discover” platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on NIQ’s data and insights to innovate and grow. Feeding data into the “Discover” platform is also a set of large-scale applications that process millions of records of data every day. As an Engineer, you’ll be part of a team of smart, highly skilled technologists who are passionate about learning and prototyping cutting-edge technologies. Right now the platform for which we are hiring is built on Java, ExtJS (Sencha), Angular, Spring Boot, Tomcat, Oracle, Elasticsearch, MongoDB, and Azure Cloud, and we continue to adopt the best of breed in cloud-native, low-latency technologies. We value CI/CD in everything that we develop. Our team is co-located and has adopted SAFe Agile, with central technology hubs in Chennai, Chicago, and Toronto. What You’ll Do Write code to develop scalable, flexible applications Ensure the maintainability and quality of code Develop secure and highly performant services and APIs Proactively identify and suggest improvements/new technologies that can improve the platform in terms of speed, quality and cost Qualifications We’re looking for people who have A Bachelor’s or Master’s degree in Computer Science or related field 6 to 18 months of software development experience Strong knowledge of data structures, algorithms Good knowledge and working experience on Java Good knowledge and experience with SQL (preferably Oracle) Excellent written and oral communication skills Experience in Angular and/or Ext JS is an added advantage Eagerness to learn independently and flexibility to work across technology stacks Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce.
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 2 weeks ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At Qube Cinema, technology and storytelling come together to create world-class cinematic experiences using In-Camera VFX (ICVFX) and real-time virtual production workflows. Our state-of-the-art LED volume stage is the heart of our production pipeline. We’re looking for a Volume Head to lead and manage this critical piece of our virtual production infrastructure. This is a senior, multidisciplinary leadership role requiring a deep understanding of real-time rendering, on-set production, LED volume systems, and collaborative team management. As Volume Head, you will be responsible for end-to-end operations of the LED volume stage — from technical setup and calibration to on-set execution, content playback, and real-time troubleshooting. You will work closely with Directors, DoPs, Production, VAD, and technical teams to ensure that the volume delivers on the creative vision while maintaining efficiency, consistency, and technical excellence. What you will be responsible for Stage Operations & Oversight Oversee daily operations of the LED volume, ensuring technical and creative readiness for each shoot Supervise screen calibration, content playback systems, camera tracking, and real-time sync across systems Manage volume crew including system techs, playback operators, tracking supervisors, and media wranglers Serving as a front-line support representative of Qube on the field to clients, vendors, and other partners Cross-Department Collaboration Work closely with the Director of Photography, Virtual Art Department (VAD), and Production Designers to ensure environments are optimised for camera and lighting Interface with the Unreal Engine team for environment testing and approvals Coordinate with the ICVFX Supervisor and production team to ensure pre-shoot workflows are followed Participate in any required client or market-specific training, calls, meetings, etc. Technical Leadership Own and optimise the volume pipeline including tracking systems, media servers, genlock/timecode sync, etc. Maintain a deep understanding of render performance, latency management, Display configuration, and physical volume hardware Collaborate with engineering and R&D teams to implement upgrades and new capabilities Planning & Scheduling Manage shoot schedules, tech prep days, and volume resets in collaboration with Production and Line Producers Evaluate scene complexity and ensure all technical requirements (tracking, playback, lighting integration) are accounted for Create and manage shot-specific volume configurations Quality Assurance & Troubleshooting Monitor output quality during rehearsals and takes; quickly troubleshoot any on-set issues Maintain show continuity logs for playback, calibration states, and tracking environments Maintain accurate records for assets, appliances, maintenance logs and preventive maintenance Enforce best practices for data integrity, content versioning, and screen health Team Management & Training Lead, train, and mentor a team of operators and technicians for ongoing stage operations Work with HR/TechOps to recruit and onboard freelance or full-time volume crew Foster a safe, collaborative, and high-performance on-set environment What we are looking for Experience: 6–10 years in film/TV production, with 3+ years in LED volume/virtual production leadership. Technical Expertise: Deep knowledge of real-time rendering (Unreal Engine), camera tracking systems, LED hardware, media servers and timecode sync. 
On-Set Experience: Strong familiarity with cinematography, lighting for LED volumes, and the pace of physical production environments. Leadership: Proven experience managing cross-functional teams in high-pressure situations. Workflow Knowledge: Understanding of the full virtual production pipeline including VAD handoff, content testing, playback integration, and ICVFX best practices. Problem Solving: Calm under pressure with a solutions-first mindset and ability to troubleshoot technical and creative issues in real time. Communication: Excellent interpersonal skills to manage expectations across departments and clearly articulate needs, risks, and timelines. Powered by JazzHR NJ1RKwCqSQ
Posted 2 weeks ago