
5008 Latency Jobs - Page 26

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Teamwork makes the stream work.

Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the Team
Roku pioneered TV streaming and continues to innovate and lead the industry. Continued success relies on investing in the Roku Content Platform so we can deliver a high-quality streaming TV experience at global scale. As part of our Content Platform team, you join a small group of highly skilled engineers who own significant responsibility for designing, developing, and maintaining our large-scale backend systems, data pipelines, storage, and processing services. We provide insights regarding all content on Roku devices.

About the Role
We are looking for a Senior Software Engineer with deep experience in backend development, data engineering, and data analytics to build a next-level content platform and data intelligence layer that empowers Search, Recommendations, and many other critical systems across the Roku platform. This is an excellent role for a senior professional who enjoys a high level of visibility, thrives on having critical business impact, is able to make critical decisions, and is excited to work on a core data platform component that many streaming components at Roku depend on.
What You'll Be Doing
- Work closely with the product management team, content data platform services, and other internal consumer teams to contribute extensively to our content data platform and its underlying architecture.
- Build low-latency, optimized streaming and batch data pipelines to enable downstream services.
- Build and support our microservices-based, event-driven backend systems and data platform.
- Design and build data pipelines for batch, near-real-time, and real-time processing.
- Participate in architecture discussions, influence the product roadmap, and take ownership of and responsibility for new projects.

We're excited if you have
- 8+ years of professional experience as a software engineer.
- Proficiency in Java/Scala/Python.
- Deep understanding of backend technologies, architecture patterns, and best practices, including microservices, RESTful APIs, message queues, caching, and databases.
- Strong analytical and problem-solving skills and command of data structures and algorithms, with the ability to translate complex technical requirements into scalable and efficient solutions.
- Experience with microservice and event-driven architectures.
- Experience with Apache Spark and Apache Flink.
- Experience with big data frameworks and tools: MapReduce, Hive, Presto, HDFS, YARN, Kafka, etc.
- Experience with Apache Airflow or similar workflow orchestration tooling for ETL.
- Experience with cloud platforms: AWS (preferred), GCP, etc.
- Strong communication and presentation skills.
- BS in Computer Science; MS in Computer Science preferred.
- AI literacy and curiosity: you have either tried Gen AI in or outside of work, or are curious about Gen AI and have explored it.

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources.
Local benefits include statutory and voluntary benefits, which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a small number of very talented people can do more, at lower cost, than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
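The pipeline work described above centres on windowed aggregation over event streams. As a rough plain-Python illustration (not Roku's code; the function and event names are invented, and a real system would use Spark or Flink), a tumbling-window count can be sketched like this:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Assign (timestamp, key) events to fixed, non-overlapping windows
    and count occurrences per (window_start, key) -- the core operation
    behind a streaming aggregation job."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Four playback events; the first two land in window [0, 60),
# the last two in window [60, 120).
events = [(3, "play"), (45, "play"), (61, "pause"), (62, "play")]
print(tumbling_window_counts(events))
# → {(0, 'play'): 2, (60, 'pause'): 1, (60, 'play'): 1}
```

A streaming engine adds what this sketch omits: out-of-order event handling via watermarks, state checkpointing, and parallelism across keys.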

Posted 1 week ago

Apply

5.0 years

0 Lacs

Greater Delhi Area

On-site

Role: Wireless Network Engineer
Experience: 5+ years
Location: Delhi NCR / Hyderabad

Job Summary:
We are seeking a highly experienced Level 3 Wireless Network Engineer with deep expertise in both Cisco and Aruba wireless technologies. The ideal candidate will lead the design, deployment, troubleshooting, and optimization of enterprise-grade wireless infrastructures. You will act as a subject matter expert (SME), resolving high-level wireless issues and driving continuous improvements in performance, scalability, and security.

Key Responsibilities:
- Design and deploy large-scale wireless LAN (WLAN) environments using Cisco WLCs and Aruba (Mobility Controllers, Instant APs, Central).
- Optimize channel planning, band steering, and power level adjustments to reduce interference and maximize coverage.
- Architect and implement high-availability wireless designs, including N+1 controller redundancy, LAGs, and VRRP/HA groups.
- Administer Cisco WLCs (9800) and Aruba Mobility Controllers (MM/MC architecture).
- Configure WLANs, SSIDs, AAA policies, and AP groups with role-based access control (RBAC).
- Implement Fast Roaming (802.11r/k/v), bandwidth throttling, application visibility, and WIPS/WIDS.
- Manage firmware upgrades, controller failovers, and AP image preloading strategies.
- Integrate with RADIUS servers (ISE, Forescout) for 802.1X authentication.
- Configure guest access portals, MAC-based authentication, and captive portals (internal and external).
- Analyze wireless packet captures using Wireshark or Aruba AirWave/Central.
- Resolve L2/L3 roaming issues, high latency, client disconnections, and interference problems.
- Monitor KPIs like SNR, RSSI, retransmission rates, and throughput to identify RF anomalies.
- Correlate client issues using Aruba Central.

Required Skills & Experience:
- 5+ years of experience in enterprise wireless networking.
- Strong hands-on expertise in both Cisco and Aruba wireless ecosystems.
Proficiency with:
- Cisco Wireless LAN Controllers (9800 Series, AireOS, Catalyst APs)
- Aruba Controllers, Instant APs, Aruba Central, AirWave
- Wireless security protocols: WPA2/WPA3, 802.1X, PSK, MAC auth
- Authentication systems: Cisco ISE
- RF tuning, mesh networks, client load balancing, high-density deployment
- Strong understanding of Layer 2/3 networking, VLANs, multicast, QoS, and DHCP relay
- Comfortable with CLI (Cisco IOS/XE, ArubaOS) and web-based UIs
- Experience with cloud-managed wireless solutions (Aruba Central; Cisco Meraki is a plus)
- Familiarity with Wi-Fi 6 and Wi-Fi 6E features and limitations

Preferred Certifications:
- Cisco Certified Specialist – Enterprise Wireless
- Aruba Certified Mobility Professional (ACMP)

Soft Skills:
- Strong analytical and troubleshooting skills.
- Excellent documentation and communication abilities.
- Proven leadership in high-severity incidents and RCA investigations.
- Capable of mentoring L1/L2 engineers and leading knowledge transfer sessions.
- Ability to manage multiple priorities and work independently.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
Nielsen's Lineup & Metadata services are the definitive source of channel lineups, program schedules, and content identifiers that power our audience-measurement products. Accuracy, coverage, and near-real-time delivery are mission-critical: they directly influence ratings quality, client billing, and viewer discovery. The Member of Technical Staff 2 (MTS2) is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including development, testing, and user acceptance.

Responsibilities
- System Deployment: Build new UI screens and client-side capture logic in React (TypeScript). Create/extend Java (Spring Boot) microservices and GraphQL/REST endpoints. Model and persist data in PostgreSQL/Aurora and S3/Parquet for analytics.
- CI/CD Implementation: Leverage CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes.
- Code Quality and Best Practices: Adhere to coding standards, best practices, and design principles. Participate in code reviews and provide constructive feedback to maintain high code quality.
- Performance Optimization: Profile React bundles, optimise API latency, instrument with Prometheus/Grafana, and help triage production issues.
- Team Collaboration: Follow best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development.
- Security and Compliance: Implement security best practices for both client and upload components. Adhere to industry standards and regulations related to web application security.

Key Skills
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Frontend: React (16+), TypeScript, Redux/Context, Jest/RTL.
- Backend: Java 17, Spring Boot, REST/GraphQL, Kafka or Kinesis.
- Databases: SQL (PostgreSQL), MongoDB, schema design, performance tuning.
- Testing: JUnit 5, Cypress/Playwright, Pact/contract testing.
- Good understanding of CI/CD principles and tools.
- Good problem-solving and debugging skills.
- Good communication and collaboration skills, with the ability to explain technical concepts clearly.
- Works collaboratively with the team to contribute to innovative solutions efficiently.

Other Good-To-Have Skills
- Exposure to the AWS tech stack.
- Working knowledge of Java and SQL.

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
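The "instrument with Prometheus/Grafana" item above amounts to recording request latencies into cumulative buckets. Here is a stdlib-only sketch of that histogram idea (the class name and bucket bounds are invented for illustration; a real service would use the prometheus_client library rather than hand-rolling this):

```python
import time
from bisect import bisect_left

class LatencyHistogram:
    """Minimal Prometheus-style histogram: observations are counted into
    cumulative latency buckets (seconds). Not the prometheus_client API."""

    def __init__(self, buckets=(0.005, 0.01, 0.05, 0.1, 0.5, 1.0)):
        self.buckets = list(buckets)
        self.counts = [0] * (len(self.buckets) + 1)  # final slot = +Inf
        self.total = 0.0

    def observe(self, seconds):
        # Index of the first bucket whose upper bound is >= the
        # observation -- Prometheus "le" (less-or-equal) semantics.
        self.counts[bisect_left(self.buckets, seconds)] += 1
        self.total += seconds

    def time_call(self, fn, *args):
        # Time a call and record its latency.
        start = time.perf_counter()
        result = fn(*args)
        self.observe(time.perf_counter() - start)
        return result

hist = LatencyHistogram()
hist.observe(0.004)  # lands in the 0.005 bucket
hist.observe(0.2)    # lands in the 0.5 bucket
print(hist.counts)   # → [1, 0, 0, 0, 1, 0, 0]
```

Bucketed counts like these are what let Grafana render latency quantiles (p50/p99) without storing every sample.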

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Details: Job Description
Stefanini Group is a multinational company with a global presence in 41 countries and 44 languages, specializing in technological solutions. We believe in digital innovation and agility to transform businesses for a better future. Our diverse portfolio includes consulting, marketing, mobility, AI services, service desk, field service, and outsourcing solutions.

Software Developer
Must have deep technical expertise in different Dynatrace components. They will provide technical assistance and consulting across the organization, including project-level planning for Dynatrace rollouts. They will help and mentor other application teams at all levels, helping to grow future talent within the organization and to gain the most value from the product.

Job Requirements Details: Mandatory Skills
- Minimum of 2-3 years of experience administering Dynatrace and onboarding complex applications to an enterprise APM tool.
- Dynatrace Associate/Professional certifications are preferred.
- Minimum of 3 years of experience with on-premises infrastructure and applications, and with cloud-hosted Infrastructure-as-a-Service and Platform-as-a-Service capabilities, including virtual networks, virtual machines, and data services.
- Proficiency in configuring, managing, and troubleshooting Dynatrace environments.
- Experience with performance data analysis, performance tuning, and performance monitoring for SaaS/cloud-based applications.
- Ability to perform technical assessment, requirement capture and analysis, workload modelling, scripting, and dashboards or scorecards.
- Ability to understand Dynatrace monitoring tools and processes.
- Hands-on experience with scripting languages and development.

Nice to have
- Expertise in migrating Dynatrace from one tenant to another.
- Expertise in implementing and creating Dynatrace extensions/plugins.
- Collaborates with end users and senior management to define business requirements for complex systems development and gain buy-in for all monitoring plans.
- Responsible for ensuring non-functional requirements for performance (such as throughput, latency, memory usage, etc.) are identified, implemented, and met.
- Develop models for performance testing based on software activity.
- Identify and report performance bottlenecks.
- Good to have: knowledge of ITSM and monitoring tools like BMC, ServiceNow, or Splunk.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Siemens Digital Industries Software is a leading provider of solutions for the design, simulation, and manufacture of products across many different industries. Formula 1 cars, skyscrapers, ships, space exploration vehicles, and many of the objects we see in our daily lives are being conceived and manufactured using our Product Lifecycle Management (PLM) software.

We are seeking AI Backend Engineers to play a pivotal role in building our Agentic Workflow Service and Retrieval-Augmented Generation (RAG) Service. In this hybrid role, you'll leverage your expertise in both backend development and machine learning to create robust, scalable AI-powered systems using AWS Kubernetes, Amazon Bedrock models, the AWS Strands framework, and LangChain/LangGraph.

Understanding of and expertise in:
- Design and implement core backend services and APIs for agentic framework and RAG systems.
- Build LLM-based applications using Amazon Bedrock models.
- Build RAG systems with advanced retrieval mechanisms and vector database integration.
- Implement agentic workflows using technologies such as the AWS Strands framework and LangChain/LangGraph.
- Design and develop microservices that efficiently integrate AI capabilities.
- Create scalable data processing pipelines for training data and document ingestion.
- Optimize model performance, inference latency, and overall system efficiency.
- Implement evaluation metrics and monitoring for AI components.
- Write clean, maintainable, and well-tested code with comprehensive documentation.
- Collaborate with cross-functional team members, including DevOps, product, and frontend engineers.
- Stay current with the latest advancements in LLMs and AI agent architectures.

Minimum Experience Requirements
- 6+ years of total software engineering experience.
- Backend development experience with strong Python programming skills.
- Experience in ML/AI engineering, particularly with LLMs and generative AI applications.
- Experience with microservices architecture, API design, and asynchronous programming.
- Demonstrated experience building RAG systems and working with vector databases.
- Experience with LangChain/LangGraph or similar LLM orchestration frameworks.
- Solid understanding of AWS services, particularly Bedrock, Lambda, and container services.
- Experience with containerization technologies and Kubernetes.
- Understanding of ML model deployment, serving, and monitoring in production environments.
- Knowledge of prompt engineering and LLM fine-tuning techniques.
- Excellent problem-solving abilities and system design skills.
- Strong communication skills and the ability to explain complex technical concepts.
- Experience with Kubernetes and AWS serverless.
- Experience working with databases (SQL, NoSQL) and data structures.
- Ability to learn new technologies quickly.

Preferred Qualifications:
- AWS certifications - Associate Architect / Developer / Data Engineer / AI track.
- Familiarity with streaming architectures and real-time data processing.
- Experience with ML experiment tracking and model versioning.
- Understanding of ML/AI ethics and responsible AI development.
- Experience with the AWS Strands framework.
- Knowledge of semantic search and embedding models.
- Contributions to open-source ML/AI projects.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

We are Siemens
A collection of over 377,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us shape tomorrow!
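At its core, the RAG service described above retrieves the documents most similar to a query and hands them to the model as context. A deliberately tiny sketch of the retrieval step (bag-of-words similarity standing in for real embeddings; the function names and documents are invented):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' -- a stand-in for a real embedding
    model, such as one served via Amazon Bedrock."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query (the 'R' in RAG).
    A production system would query a vector database rather than
    scanning linearly, then pass the hits to the LLM as context."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "PLM software manages the product lifecycle",
    "Formula 1 cars are designed with simulation tools",
]
print(retrieve("product lifecycle management", docs)[0])
# → PLM software manages the product lifecycle
```

Frameworks like LangChain wrap exactly this loop: embed, retrieve top-k, then prompt the model with the retrieved passages.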
We offer a comprehensive reward package which includes a competitive basic salary, bonus scheme, generous holiday allowance, pension, and private healthcare.

Siemens Software. 'Transform the everyday', #SWSaaS

Posted 1 week ago

Apply

5.0 years

0 Lacs

Greater Delhi Area

On-site

Role: Datacenter Network Engineer
Experience: 5+ years
Location: Delhi / Noida / Hyderabad

We are seeking a seasoned Level 3 Datacenter Network Engineer with in-depth experience in Cisco and Aruba networking technologies. The ideal candidate will lead the design, implementation, and troubleshooting of high-performance data center infrastructure with a focus on high availability, security, and scalability. You will serve as a technical SME for complex network problems, architecture decisions, and cross-platform integrations in modern hybrid environments.

Key Responsibilities:

Design & Architecture:
- Design and deploy resilient, high-speed, low-latency network topologies across the data center fabric using Cisco Nexus (3K/5K/7K/9K) and Aruba CX switches.
- Architect Layer 2/Layer 3 networks with VXLAN/EVPN overlays; BGP EVPN, OSPF, and LACP; and vPC, MLAG, and VSX.
- Design and implement fabric-based topologies and spine-leaf architectures.
- Configure Cisco Nexus (NX-OS) and Aruba CX (AOS-CX).
- Deploy and manage L2/L3 segmentation, VRFs, VLANs, SVIs, private VLANs, and inter-VLAN routing.
- Implement routing redistribution, policy-based routing, HSRP/VRRP/GLBP, and route filtering.
- Work with overlay technologies (VXLAN, LISP) and underlay IP fabric (BGP/OSPF).

Automation & Monitoring:
- Automate provisioning using Ansible, Python, or REST APIs for Aruba CX and Cisco Nexus (via NX-API or Cisco DNAC).
- Utilize NetEdit (Aruba), Aruba Fabric Composer, and Cisco DCNM or APIC for centralized management.
- Perform capacity planning, failover testing, and performance tuning of the datacenter network.
- Lead root cause analysis (RCA) and troubleshooting of L2/L3 issues, link flaps, STP/RSTP, convergence issues, and fabric-related errors.
- Serve as the Tier 3 escalation point for critical incidents and collaborate with vendors (TAC/support).
- Create detailed network diagrams, HLDs/LLDs, and SOPs for operational teams.
- Perform firmware upgrades and code validation, and manage change controls (ITIL-based processes).

Required Skills & Experience:
- 5+ years of experience in enterprise or data center networking.
- Deep hands-on experience with:
  - Cisco Nexus platforms (9K, 7K, 5K, 3K) running NX-OS
  - Aruba CX switches (6300, 6400, 8325, 8400) with AOS-CX
  - Fabric technologies: VXLAN/EVPN, VSX, MLAG, vPC, LAG
  - Routing protocols: BGP, OSPF, static routing
  - L2 protocols: STP, RSTP, MST, LACP, CDP/LLDP
- Expertise in:
  - CLI troubleshooting
  - Packet captures with SPAN/RSPAN
  - QoS design, queuing strategies, and buffer tuning
  - Dual-stack networking (IPv4/IPv6)

Preferred Certifications:
- Cisco Certified Network Professional (CCNP – Data Center or Enterprise)

Soft Skills:
- Excellent troubleshooting and diagnostic skills under pressure.
- Strong documentation, diagramming, and presentation abilities.
- Self-driven, with the ability to work independently and lead projects.
- Clear communication across technical and business teams.
- Ability to mentor junior engineers and support staff.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Job Title: Senior Audio Hardware Engineer
Location: Thane, MH

About the Role:
Namaskaram! We are looking for a highly skilled Audio Hardware Engineer to join our team and design and develop advanced audio systems for our next-generation products. In this role, you will be responsible for architecting and implementing high-performance audio hardware subsystems, from microphones and speakers to amplifiers and audio codecs.

Key Responsibilities:
- Design, prototype, and validate analog and digital audio circuits, including microphone arrays, speaker drivers, amplifiers, and codecs.
- Perform schematic design, PCB layout reviews, and bring-up of audio hardware.
- Conduct in-depth analysis and optimization of audio quality, including signal integrity, noise reduction, echo cancellation, and distortion control.
- Evaluate and select audio components (MEMS mics, DACs/ADCs, amps, etc.) based on performance, cost, and power consumption.
- Own the hardware validation and test plan for audio systems using test equipment such as Audio Precision analyzers, oscilloscopes, and spectrum analyzers.
- Work closely with manufacturing partners to resolve issues related to audio hardware during prototyping and mass production.

Required Qualifications:
- Bachelor's or Master's degree in Electrical Engineering, Electronics, or a related field.
- 5+ years of hands-on experience in audio hardware design and development.
- Proven experience with audio signal path design, in both analog and digital domains.
- Proficiency with schematic capture and PCB layout tools (e.g., Altium Designer, Cadence).
- Experience with electroacoustic testing, tuning, and validation.
- Strong understanding of audio performance metrics (THD+N, SNR, latency, etc.).
- Familiarity with EMI/EMC mitigation techniques and low-noise analog design.
- Experience working on consumer electronics, wearables, or AR/VR products is a strong plus.

Nice to Have:
- Experience with beamforming, active noise cancellation (ANC), or echo cancellation (AEC).
- Understanding of Bluetooth audio (A2DP, LE Audio) and relevant audio transmission standards.
- Familiarity with voice assistants and wake-word detection architectures.
- Experience collaborating with DSP and algorithm teams.
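Of the audio performance metrics listed above, SNR is the simplest to state concretely: the ratio of signal RMS to noise RMS, expressed in dB. A minimal sketch of computing it from raw samples (sample values are illustrative; bench measurement would of course use calibrated equipment such as an Audio Precision analyzer):

```python
import math

def rms(samples):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB -- one of the bench metrics
    (alongside THD+N) named in the role."""
    return 20 * math.log10(rms(signal) / rms(noise))

# A 1 kHz full-scale tone sampled at 48 kHz (10 full cycles),
# against a constant low-level noise floor (illustrative values).
tone = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(480)]
noise = [0.001] * 480
print(round(snr_db(tone, noise), 1))  # ≈ 57.0 dB
```

The same RMS building block underlies THD+N, which compares the fundamental's level against everything else (harmonics plus noise) after notching the fundamental out.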

Posted 1 week ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Business Analyst Lead – Generative AI
Experience: 7–15 Years
Location: Bangalore
Designation Level: Lead

Role Overview:
We are looking for a Business Analyst Lead with a strong grounding in Generative AI to bridge the gap between innovation and business value. In this role, you'll drive adoption of GenAI tools (LLMs, RAG systems, AI agents) across enterprise functions, aligning cutting-edge capabilities with practical, measurable outcomes.

Key Responsibilities:

1. GenAI Strategy & Opportunity Identification
- Collaborate with cross-functional stakeholders to identify high-impact Generative AI use cases (e.g., AI-powered chatbots, content generation, document summarization, synthetic data).
- Lead cost-benefit analyses (e.g., fine-tuning open-source models vs. adopting commercial LLMs like GPT-4 Enterprise).
- Evaluate ROI and adoption feasibility across departments.

2. Requirements Engineering for GenAI Projects
- Define and document both functional and non-functional requirements tailored to GenAI systems:
  - Accuracy thresholds (e.g., hallucination rate under 5%)
  - Ethical guardrails (e.g., PII redaction, bias mitigation)
  - Latency SLAs (e.g., <2 seconds response time)
- Develop prompt engineering guidelines, testing protocols, and iteration workflows.

3. Stakeholder Collaboration & Communication
- Translate technical GenAI concepts into business-friendly language.
- Manage expectations around probabilistic outputs and incorporate validation workflows (e.g., human-in-the-loop review).
- Use storytelling and outcome-driven communication (e.g., "Automated claims triage reduced handling time by 40%.").

4. Business Analysis & Process Modeling
- Create advanced user story maps for multi-agent workflows (AutoGen, CrewAI).
- Model current and future business processes using BPMN to reflect human-AI collaboration.

5. Tools & Technical Proficiency
- Hands-on experience with LangChain and LlamaIndex for LLM integration.
- Knowledge of vector databases, RAG architectures, and LoRA-based fine-tuning.
- Experience using Azure OpenAI Studio, Google Vertex AI, and Hugging Face.
- Data validation using SQL and Python; exposure to synthetic data generation tools (e.g., Gretel, Mostly AI).

6. Governance & Performance Monitoring
- Define KPIs for GenAI performance:
  - Token cost per interaction
  - User trust scores
  - Automation rate and model drift tracking
- Support regulatory compliance with audit trails and documentation aligned with the EU AI Act and other industry standards.

Required Skills & Experience:
- 7–10 years of experience in business analysis or product ownership, with a recent focus on Generative AI or applied ML.
- Strong understanding of the GenAI ecosystem and solution lifecycle, from ideation to deployment.
- Experience working closely with data science, engineering, product, and compliance teams.
- Excellent communication and stakeholder management skills, with a focus on enterprise environments.

Preferred Qualifications:
- Certification in Business Analysis (CBAP/PMI-PBA) or AI/ML (e.g., Coursera/Stanford/DeepLearning.AI).
- Familiarity with compliance and AI regulations (GDPR, EU AI Act).
- Experience in BFSI, healthcare, telecom, or other regulated industries.
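Of the KPIs above, token cost per interaction is straightforward to compute once token counts and per-token prices are known. A minimal sketch (the function name and prices are invented placeholders, not any vendor's actual rates):

```python
def cost_per_interaction(prompt_tokens, completion_tokens,
                         price_in_per_1k, price_out_per_1k):
    """Token cost of a single LLM interaction, with separate input
    and output prices quoted per 1,000 tokens."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# 1,200 prompt tokens in, 300 completion tokens out,
# at $0.01 / $0.03 per 1k tokens respectively.
print(round(cost_per_interaction(1200, 300, 0.01, 0.03), 6))  # → 0.021
```

Tracking this per use case (and dividing by interactions automated) is what turns raw token bills into the ROI figures the role is asked to report.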

Posted 2 weeks ago

Apply

0 years

1 - 1 Lacs

Bilāspur

On-site

Qualification: B.E./B.Tech in ECE, CSE, IT, EEE, or equivalent (AICTE/UGC recognized)

Key Responsibilities
* Supervise and monitor the design, deployment & maintenance of network infrastructure
* Oversee installation, configuration, and testing of routers, switches, and GPON/OLT/ONT equipment
* Conduct field-level quality checks to ensure project compliance
* Support troubleshooting for network connectivity & configuration issues
* Monitor performance metrics – latency, signal strength, throughput
* Maintain detailed site visit logs, test reports & quality documentation
* Ensure network security protocols and standard practices are followed
* Coordinate with PIAs, OEMs, and the State NOC for validations and performance reviews
* Travel across district project sites as required under the BharatNet implementation plan

Preferred Skills
* Knowledge of fiber optics, GPON, IP/MPLS, and L2/L3 networking
* Hands-on experience with network testing tools & signal analysis
* Strong communication, problem-solving, and teamwork skills
* Willingness to travel extensively across project locations

Interested candidates can share their CV at cv@medhaj.com, with a cc to jhooma@medhaj.com. For any queries, please call or WhatsApp #8090454724.

Job Type: Full-time
Pay: ₹13,800.00 - ₹15,000.00 per month
Benefits: Cell phone reimbursement
Work Location: In person
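The performance metrics named above (latency in particular) reduce to simple statistics over repeated measurements. A minimal sketch of summarizing a site's ping results (the function name and sample values are invented; field tools report the same min/avg/max figures):

```python
import statistics

def latency_report(samples_ms):
    """Summarize round-trip latency samples (ms) into the figures a
    field QA log would record: min/avg/max plus jitter (sample stdev)."""
    return {
        "min": min(samples_ms),
        "avg": round(statistics.fmean(samples_ms), 2),
        "max": max(samples_ms),
        "jitter": round(statistics.stdev(samples_ms), 2),
    }

# Five ping round-trip times collected at a site (illustrative values).
print(latency_report([12.1, 13.4, 11.9, 15.0, 12.6]))
# → {'min': 11.9, 'avg': 13.0, 'max': 15.0, 'jitter': 1.26}
```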

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Hyderābād

On-site

Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate, and advance faster than ever.

Role Overview
We are looking for a highly skilled Senior Engineer to lead SSD failure analysis (FA), debug, and performance validation activities for NVMe-based products. This role is execution-focused, with a deep emphasis on hands-on debugging, test script ownership, and performance analysis; firmware development is secondary.

Key Responsibilities
- Own and lead first-level FA and debug of SSD firmware issues: triage logs, isolate failures, and identify root causes.
- Drive execution of validation and performance testing, including tracking test failures, generating debug reports, and working with developers to implement fixes.
- Develop, maintain, and optimize performance test scripts (e.g., IOPS, latency, throughput) for SSD firmware validation.
- Perform latency profiling, throughput analysis, and trace interpretation to identify bottlenecks or firmware-level inefficiencies.
- Analyze logs from NVMe/PCIe-based SSD systems to identify protocol-level or firmware-level faults.
- Support issue recreation in lab setups, handle escalations from validation or system teams, and communicate findings clearly.
- Coordinate with cross-functional teams (firmware development, validation, hardware, product engineering) to drive quick resolution.
- Maintain and enhance debug infrastructure, trace capture frameworks, and automation tools for validation teams.
- Contribute to execution strategy, milestone planning, and prioritization of critical firmware issues for closure.
- Act as a technical bridge between validation and firmware development teams.

Required Experience
- 4–8 years in the SSD firmware domain, specifically in execution, debug, and failure analysis.
- Strong knowledge of the NVMe protocol, NAND flash management, and SSD architecture.
- Hands-on experience with performance metrics, latency breakdowns, and system profiling.
- Strong debugging skills with tools like serial logs, logic analyzers, JTAG, and trace decoders.
- Ability to write, debug, and manage performance-related test scripts (Python, Bash, or similar).
- Experience with defect tracking tools (e.g., Jira), log analysis, and execution dashboards.
- Understanding of embedded environments; ARM architecture and C/C++ familiarity is a plus (reading/modifying code only).

About Micron Technology, Inc.
We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com.

Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.
AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification. Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.

Posted 2 weeks ago

Apply

7.0 years

10 Lacs

Hyderābād

On-site

About Us: Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here . Overview: Join us at Seismic, a cutting-edge technology company leading the way in the SaaS industry. We specialize in delivering modern, scalable, and multi-cloud solutions that empower businesses to succeed in today's digital era. Leveraging the latest advancements in technology—including Generative AI—we are committed to driving innovation and transforming how enterprise organizations operate. As part of our continued growth, we are expanding our AI and Software Engineering team in Hyderabad, India, and looking for an Engineering Manager to help scale our AI-driven applications and integrations. As an Engineering Manager – AI and Software Engineering, you will lead and manage a team of engineers focused on delivering scalable and intelligent AI capabilities across the Seismic platform and external integrations. 
Your team will be responsible for engineering solutions that interconnect Seismic’s core AI capabilities with the broader enterprise ecosystem, including Microsoft Teams, M365 Copilot, Salesforce Agentforce, and evolving frameworks such as MCP (Model Context Protocol). This role is ideal for a hands-on technical manager who is excited about building infrastructure and services that interoperate across complex ecosystems while enabling consistent, secure, and intelligent user experiences. Seismic AI: AI is one of the fastest-growing product areas in Seismic. We believe that AI, particularly Generative AI, will empower and transform how enterprise sales and marketing organizations operate and interact with customers. Seismic Aura, our leading AI engine, is powering this change in the sales enablement space and is being infused across the Seismic Enablement Cloud. Our focus is to leverage AI across the Seismic platform to make our customers more productive and efficient in their day-to-day tasks, and to drive more successful sales outcomes. Why Join Us: Opportunity to be a key technical leader in a rapidly growing company and drive innovation in the SaaS industry. Work with cutting-edge technologies and be at the forefront of AI advancements. Competitive compensation package, including salary, bonus, and equity options. A supportive, inclusive work culture. Professional development opportunities and career growth potential in a dynamic and collaborative environment. Who you are: Experience: 7+ years of experience in software engineering, with at least 2 years in a leadership or management role. AI Expertise: Proficiency in building and deploying Generative AI use cases. Experience with Generative AI applications. Technical Expertise: Experience with backend development using one or multiple languages, such as Python, C# and .NET; unit testing; object-oriented programming; and relational databases.
Experience with Infrastructure as Code (Terraform, Pulumi, etc.), event-driven architectures with tools like Kafka, and feature management (LaunchDarkly) is good to have. Front-end/full-stack experience is a plus. Cloud Expertise: Experience with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure. Knowledge of cloud-native services for AI/ML, data storage, and processing. Experience deploying containerized applications into Kubernetes. SaaS Knowledge: Extensive experience in SaaS application development and cloud technologies, with a deep understanding of modern distributed systems and cloud operational infrastructure. Ecosystem Integration Experience: Familiarity with Microsoft 365 extensibility models, Microsoft Teams development, Salesforce integration, and AI orchestration environments (such as MCP or similar agent-based coordination layers). Product Development: Proven ability to translate product needs into scalable solutions. Experience working in SaaS environments and collaborating across functions. Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Mindset: Proactive, hands-on, and delivery-focused, with a strong sense of accountability and attention to detail. What you'll be doing: Team Leadership and Management: Lead, mentor, and manage a team of engineers, fostering a culture of collaboration, innovation, and continuous learning. Provide performance feedback, career development, and technical guidance to team members. Technology and Architecture Development: Define and drive the AI and software technology and architectural directions, ensuring alignment with business goals and product roadmaps. AI Platform Integration: Oversee the engineering of integrations between Seismic’s AI services and external systems, including Microsoft Teams, M365 Copilot, Salesforce Agentforce, and related enterprise platforms.
AI Ecosystem: Track and adopt evolving protocols, APIs, and standards in the AI ecosystem, ensuring that Seismic remains interoperable, extensible, and enterprise-ready. Technical Execution: Guide backend and integration architecture, ensuring the performance, reliability, and security of AI-driven services across multiple environments. Cross-functional Collaboration: Work closely with product managers, data scientists, engineers, and design teams to deliver features that balance performance, privacy, and usability. Project Delivery: Own delivery pipelines from planning through production, including code quality, deployment readiness, and observability of production systems. Metrics & Monitoring: Define and monitor KPIs for integration reliability, data flow, latency, and system interoperability. Job Posting Footer: If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here. Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries, including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Linkedin Posting Section: #LI-ST1

Posted 2 weeks ago

Apply

16.0 years

5 - 10 Lacs

Hyderābād

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: CI/CD Pipeline Management Design, implement, and maintain Continuous Integration and Continuous Deployment (CI/CD) pipelines Automate testing, building, and deployment processes to ensure rapid and reliable software delivery Infrastructure as Code (IaC) Use tools like Terraform, Ansible, or AWS CloudFormation to manage infrastructure programmatically Maintain version-controlled infrastructure for consistency and scalability Monitoring and Logging Set up and manage monitoring tools (e.g., Prometheus, Grafana, ELK stack) to ensure system health and performance Implement alerting systems to proactively detect and resolve issues Cloud and Containerization Deploy and manage applications in cloud environments (AWS, Azure, GCP) Use containerization tools like Docker and orchestration platforms like Kubernetes Security and Compliance Integrate security practices into the DevOps lifecycle (DevSecOps) Conduct vulnerability scans, manage secrets, and ensure compliance with industry standards Collaboration and Support Work closely with development, QA, and operations teams to streamline workflows Provide support for development environments and troubleshoot deployment issues Performance Optimization Analyze system performance and implement improvements to reduce latency and increase throughput Comply with the terms and conditions of the employment contract, company policies and 
procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: B.Tech/MCA (16+ years of formal education; correspondence courses are not relevant) 6+ years of work experience in product companies Solid hands-on experience with AWS services (EC2, S3, RDS, Lambda, VPC, IAM, etc.) Experience in deploying scalable solutions to complex problems, from defining the problem, through implementing the solution, to launching the new product successfully Knowledge of statistics, machine learning models, model deployment, and Generative AI Knowledge of networking concepts, DNS, load balancing, and firewalls Exposure to the agile development process Proven proactive approach to spotting problems, areas for improvement, and performance bottlenecks Proven excellent communication, writing and presentation skills Desire to be proactive and work towards implementing improvements Preferred Qualifications: AWS certifications (e.g., AWS Certified DevOps Engineer, Solutions Architect) Azure basic knowledge Familiarity with serverless architecture and tools (e.g., AWS Lambda, API Gateway) Exposure to Agile/Scrum methodologies At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 2 weeks ago

Apply

10.0 years

5 - 10 Lacs

Hyderābād

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Leadership & Strategy Lead and mentor cross-functional teams focused on AI/ML incubation and engineering enablement Define and drive the strategic roadmap for innovation initiatives and technical enablement Foster a culture of experimentation, rapid iteration, and continuous learning Incubation & Innovation Identify, evaluate, and incubate high-impact AI/ML ideas aligned with business goals Oversee the development of proof-of-concepts (PoCs) and prototypes to validate new technologies and approaches Collaborate with product, research, and business stakeholders to prioritize and refine ideas Engineering Enablement Build and scale internal platforms, tools, and frameworks to accelerate AI/ML development Establish best practices, coding standards, and reusable components for AI/ML engineering Provide technical guidance and support to engineering teams adopting AI/ML technologies Collaboration & Communication Act as a bridge between research, engineering, and product teams to ensure alignment and knowledge transfer Present technical concepts and project outcomes to executive leadership and stakeholders Promote knowledge sharing through documentation, workshops, and internal communities of practice Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or 
re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field 10+ years of experience in software engineering with 3+ years in management Experience in React, JavaScript, TypeScript, Node.js, React Native, Java, Python, and Spring Boot frameworks Experience in data management with RDBMS and NoSQL Experience in building/managing CI/CD pipelines with Jenkins, GitHub Actions, etc. Proven experience in AI/ML technologies, including model development, deployment, and MLOps Solid understanding of cloud platforms (e.g., Azure, GCP) with hands-on experience on AWS and Terraform Proven excellent communication, collaboration, and organizational skills Preferred Qualifications: Experience with LLMs, generative AI, or reinforcement learning Experience with agile methodologies and startup-like environments Familiarity with open-source AI/ML tools and frameworks (e.g., TensorFlow, PyTorch, MLflow) Knowledge of machine learning algorithms: supervised, unsupervised, reinforcement learning Knowledge of model evaluation: precision, recall, F1-score, ROC-AUC Knowledge of data preprocessing: feature engineering, normalization, handling missing data Knowledge of Responsible AI: fairness, explainability (XAI), bias mitigation Knowledge of AutoML: hyperparameter tuning, model selection automation Knowledge of Federated Learning & Edge AI: for privacy-preserving and low-latency applications Deep Learning: CNNs, RNNs, Transformers, GANs Background in innovation labs, R&D, or technical incubation environments At
UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Assistant Vice President – Generative AI – Systems Architect. Role Overview: We are looking for an experienced Systems Architect with extensive experience in designing and scaling Generative AI systems to production. This role requires an individual with deep expertise in system architecture, software engineering, data platforms, and AI infrastructure, who can bridge the gap between data science, engineering, and business. You will be responsible for the end-to-end architecture of GenAI systems, including model lifecycle management, inference, orchestration, and pipelines. Key Responsibilities: Architect and design end-to-end systems for production-grade Generative AI applications (e.g., LLM-based chatbots, copilots, content generation tools).
Define and oversee system architecture covering data ingestion, model training/fine-tuning, inferencing, and deployment pipelines. Establish architectural tenets like modularity, scalability, reliability, observability, and maintainability. Collaborate with data scientists, ML engineers, platform engineers, and product managers to align architecture with business and AI goals. Choose and integrate foundation models (open source or proprietary) using APIs, model hubs, or fine-tuned versions. Evaluate and design solutions based on architecture patterns such as Retrieval-Augmented Generation (RAG), Agentic AI, Multi-modal AI, and Federated Learning. Design secure and compliant architecture for enterprise settings, including data governance, auditability, and access control. Lead system design reviews and define non-functional requirements (NFRs), including latency, availability, throughput, and cost. Work closely with MLOps teams to define the CI/CD processes for model and system updates. Contribute to the creation of reference architectures, design templates, and reusable components. Stay abreast of the latest advancements in GenAI, system design patterns, and AI platform tooling. Qualifications we seek in you! Minimum Qualifications: Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices. Deep understanding of Generative AI architectures, including LLMs, diffusion models, prompt engineering, and model fine-tuning. Strong experience with at least one cloud platform (AWS, GCP, or Azure) and services like SageMaker, Vertex AI, or Azure ML. Experience with Agentic AI systems or orchestrating multiple LLM agents. Experience with multimodal systems (e.g., combining image, text, video, and speech models). Knowledge of semantic search, vector databases, and retrieval techniques in RAG. Familiarity with Zero Trust architecture and advanced enterprise security practices.
Experience in building developer platforms/toolkits for AI consumption. Contributions to open-source AI system frameworks or thought leadership in GenAI architecture. Hands-on experience with tools and frameworks like LangChain, Hugging Face, Ray, Kubeflow, MLflow, or Weaviate/FAISS. Knowledge of data pipelines, ETL/ELT, and data lakes/warehouses (e.g., Snowflake, BigQuery, Delta Lake). Solid grasp of DevOps and MLOps principles, including containerization (Docker), orchestration (Kubernetes), CI/CD pipelines, and model monitoring. Familiarity with system design tradeoffs in latency vs cost vs scale for GenAI workloads. Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Experience in software/system architecture, with experience in GenAI/AI/ML. Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices. Strong interpersonal and communication skills; ability to collaborate and present to technical and executive stakeholders. Certifications in cloud platforms (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect Expert, Google Cloud Professional Data Engineer). Familiarity with data governance and security best practices. Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up.
Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job: Assistant Vice President | Primary Location: India-Hyderabad | Schedule: Full-time | Education Level: Master's / Equivalent | Job Posting: Jul 22, 2025, 12:35:15 AM | Unposting Date: Ongoing | Master Skills List: Digital | Job Category: Full Time
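To ground the Retrieval-Augmented Generation pattern this posting references, here is a deliberately minimal sketch. Word-overlap (Jaccard) scoring stands in for embedding similarity, and the corpus, query, and function names are invented for illustration; a production pipeline would retrieve from a vector store and send the assembled prompt to an LLM.

```python
def score(query, doc):
    """Jaccard overlap of word sets -- a stand-in for embedding similarity."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query, corpus, k=2):
    """Return the k best-matching passages for the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query, corpus, k=2):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Invented mini-corpus; real systems index thousands of chunked documents.
corpus = [
    "Invoices are processed within five business days.",
    "The cafeteria opens at eight.",
    "Expense reports require manager approval before processing.",
]
print(build_prompt("How are expense reports processed?", corpus))
```

The structure (retrieve, then condition the generator on retrieved context) is the whole pattern; everything the NFR discussion above covers (latency, cost, availability) comes from hardening each of these two stages.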

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Description Join the Home Lending Infrastructure Automation team, where you'll play a crucial role in supporting the expansion of automated data testing across Home Lending. As a Senior Testing Associate in Home Lending, you will collaborate with cross-functional teams to contribute to strategic data automation planning and execution across a diverse application suite. Job Responsibilities: Collaborate with cross-functional stakeholders to develop high-volume, low-latency test automation tools for testing Home Lending data across multiple platforms. Write efficient, testable code in AWS to ensure data is thoroughly validated and tested, and utilize AWS insights to report data quality results. Assist in ensuring seamless code integration into Home Lending Tech's CI/CD build and deploy pipelines. Support the definition and documentation of automation-focused test strategies for products and applications. Contribute to continuous improvement by exploring innovative solutions for design review and implementation. Participate in implementing ideas from concept through to execution, including root cause analysis. Assist in managing timelines and dependencies while liaising with stakeholders and functional partners. Required Qualifications, Capabilities, and Skills: Bachelor's Degree in Computer Science, Information Technology, or a related field. Experience in writing code in AWS and leveraging AWS tools for testing processes. Familiarity with the Software Development Life Cycle and ability to contribute to various phases. Experience with databases (Oracle, MySQL, SQL Server) and proficiency in writing queries. Experience with functional testing automation tools (Selenium, Java, Cucumber, Python) and with API test automation (Rest Assured, SoapUI, Postman). Experience with CI/CD environments and tools (Jenkins) and parallel execution (Selenium Grid). SQL proficiency for effective data querying and analysis.
Preferred Qualifications, Capabilities, and Skills: Ability to assist in building and implementing architectural designs that enhance testing processes. Experience in data visualization tools (e.g., Tableau, Alteryx) for enhanced reporting and insights. Effective collaboration skills with engineering, design, and business teams. Strong organization and time management capabilities. A team player eager to collaborate with others. Demonstrates strong problem-solving skills and innovative thinking. Shows a proactive approach to learning and adapting to new technologies.
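One of the simplest automated data checks in the kind of suite this posting describes is source-to-target row-count parity. The sketch below is illustrative only: the table names and counts are invented, and a real harness would populate both dictionaries from SELECT COUNT(*) queries against the source and target databases.

```python
def compare_row_counts(source_counts, target_counts):
    """Return tables whose row counts differ between source and target.

    Both arguments map table name -> row count; a table missing from one
    side is reported with a count of None.
    """
    mismatches = {}
    for table in sorted(source_counts.keys() | target_counts.keys()):
        src = source_counts.get(table)
        tgt = target_counts.get(table)
        if src != tgt:
            mismatches[table] = {"source": src, "target": tgt}
    return mismatches

# Hypothetical counts; in practice these come from SQL queries.
source = {"loans": 1_204, "payments": 9_871}
target = {"loans": 1_204, "payments": 9_870}
print(compare_row_counts(source, target))
```

A check like this runs cheaply on every pipeline execution; deeper column-level comparisons are then triggered only for the tables it flags.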

Posted 2 weeks ago

Apply

2.0 years

5 - 18 Lacs

India

On-site

Job Title: Data Engineer with Strong Communication Skills. We are looking for a Data Engineer who not only excels in building scalable and efficient data pipelines, but also possesses exceptional communication and presentation skills. This role bridges technical execution with cross-functional collaboration, requiring the ability to explain complex data infrastructure in simple terms to technical and non-technical stakeholders alike. Key Responsibilities: Data Pipeline Development: Design, build, and maintain robust ETL/ELT pipelines using modern data engineering tools (e.g., Spark, Airflow, dbt). Data Modeling: Develop and optimize data models (star/snowflake schemas) that support analytics and reporting use cases. Collaboration & Communication: Work closely with data analysts, data scientists, and business teams to understand data needs. Deliver clear, engaging presentations of data architecture, pipeline performance, and data quality issues. Translate business requirements into scalable engineering solutions and explain trade-offs in plain language. Data Quality & Governance: Implement and monitor data quality checks and ensure compliance with data governance policies. Documentation: Produce clear technical documentation and present data lineage and infrastructure to non-technical teams. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field. 2+ years of experience in data engineering or a related role. Proficiency in SQL, Python, and at least one big data platform (e.g., Hadoop, Spark, Snowflake, Redshift). Experience with orchestration tools (e.g., Apache Airflow) and cloud data services (AWS, GCP, or Azure). Exceptional verbal and written communication skills. Comfortable speaking in team meetings, executive briefings, and client-facing settings. Preferred Qualifications: Experience working in a cross-functional, agile team environment. Familiarity with data privacy regulations (GDPR, CCPA).
Previous experience presenting at internal town halls, data literacy sessions, or industry meetups. Experience with visualization tools (e.g., Looker, Tableau) is a plus. What Sets You Apart You're as confident writing efficient SQL as you are explaining pipeline latency to a product manager. You thrive at the intersection of data infrastructure and human communication. You believe that data engineering should be as accessible and transparent as it is powerful. Benefits Competitive salary and stock options Flexible work environment Health, dental, and vision insurance Learning & development budget Opportunities to present at internal or external conferences Job Type: Full-time Pay: ₹505,184.34 - ₹1,843,823.89 per year Work Location: In person
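The data-quality responsibility in this posting can be made concrete with a minimal sketch. The column names and rows below are hypothetical, and in practice such checks usually live in a framework like dbt tests or Great Expectations rather than hand-rolled functions.

```python
def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or NULL."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_unique(rows, column):
    """Return indices of rows whose `column` value duplicates an earlier row."""
    seen, dupes = set(), []
    for i, row in enumerate(rows):
        value = row.get(column)
        if value in seen:
            dupes.append(i)
        seen.add(value)
    return dupes

# Hypothetical staging rows headed for a star-schema fact table.
rows = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": None},
    {"order_id": 2, "amount": 75.5},
]
print(check_not_null(rows, "amount"))   # rows with NULL amounts
print(check_unique(rows, "order_id"))   # rows with duplicate keys
```

Not-null and uniqueness checks on key columns are exactly the failures (dropped joins, double-counted facts) that are easiest to explain to non-technical stakeholders, which fits the communication emphasis of this role.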

Posted 2 weeks ago

Apply

2.0 - 4.0 years

12 - 15 Lacs

Gurgaon

On-site

Job Title: AI/ML Engineer – FastAPI, FAISS & Vector Databases. Location: Gurgaon (On-site). Job Type: Full-Time. Job Overview: We’re hiring an AI/ML Engineer who can build scalable and intelligent systems using FastAPI, FAISS, and vector search technologies. If you’re passionate about retrieval-augmented generation (RAG), semantic search, and deploying real-world ML pipelines, we’d love to meet you! Key Responsibilities: Build, optimize, and maintain scalable ML APIs using FastAPI. Implement and manage vector-based search systems using FAISS, Pinecone, or similar tools. Create and serve embeddings using transformer models for semantic search, recommendations, or chatbot systems. Design and deploy RAG (retrieval-augmented generation) pipelines using LLMs. Work with large unstructured datasets (text, documents, etc.) to extract features and build indexes. Collaborate with data scientists and backend engineers to integrate models into production. Monitor and improve model performance and API latency in production environments. Required Skills: Strong experience with Python and FastAPI. Hands-on experience with FAISS and vector similarity search concepts. Familiarity with sentence-transformers, Hugging Face, or OpenAI embeddings. Working knowledge of vector databases like FAISS, Weaviate, Pinecone, or Qdrant. Experience with NLP pipelines, tokenization, and text preprocessing. Comfortable with RESTful APIs, JSON, and basic cloud deployment (e.g., AWS/GCP). Bonus/Good to Have: Experience with LangChain, LLMs, or RAG architecture. MLOps practices for model serving and monitoring. Exposure to Docker, Kubernetes, Airflow, or CI/CD pipelines. Prior experience deploying LLM-based search or Q&A systems. Ideal Candidate Profile: 2–4 years of experience in AI/ML engineering or backend roles with an ML focus. Comfortable building APIs that connect machine learning models to real users. Familiar with working in fast-paced, agile environments. Curious about LLMs, embeddings, and real-time AI
applications. Job Type: Full-time. Pay: ₹1,200,000.00 - ₹1,500,000.00 per year. Work Location: In person. Application Deadline: 25/07/2025. Expected Start Date: 01/08/2025
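At its core, the vector-search work this role centers on is nearest-neighbour lookup by similarity. The brute-force sketch below illustrates the concept with invented document IDs and toy 3-dimensional vectors; FAISS replaces this linear scan with optimized indexes (such as a flat inner-product index) over real embedding vectors of hundreds of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, index, k=2):
    """Return (doc_id, score) pairs for the k nearest vectors to the query."""
    scored = [(doc_id, cosine(query, vec)) for doc_id, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Toy index: in practice these vectors come from an embedding model.
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], index, k=2))
```

Wrapping `top_k` behind a FastAPI endpoint that embeds the incoming query and returns the matched document IDs is essentially the serving path this posting describes.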

Posted 2 weeks ago

Apply

0 years

2 - 3 Lacs

Gurgaon

On-site

Ready to build the future with AI? At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Lead Associate – EUC Platform

We are seeking candidates with good hands-on experience in macOS support, agentic systems, and application packaging. A foundational understanding of integrating Generative AI capabilities—such as working with OpenAI APIs and prompt engineering—is highly desirable. The role also requires experience with cloud deployments and a solid working knowledge of device management tools like SCCM or equivalent.

Responsibilities
Design, develop, and implement solutions leveraging Generative AI technologies, including integration of OpenAI APIs into enterprise workflows or applications.
Create, test, and optimize prompts for various use cases to ensure accurate and contextually relevant AI responses.
Collaborate with cross-functional teams to identify opportunities for AI-driven automation, personalization, or decision support.
Monitor API usage, performance, and cost to ensure scalable and efficient implementation of AI features.
Stay updated on advancements in large language models (LLMs), prompt engineering techniques, and AI safety best practices.
Build reusable prompt libraries and maintain documentation for internal teams and stakeholders.
Troubleshoot AI integration issues and provide guidance on fine-tuning responses for business-specific requirements.
Experience supporting thin clients, Macs, and Chromebooks is an added advantage.
Excellent at handling customer calls, meetings, and escalations, and at delivery for 24/7 IT operations; able to work under pressure.
Good communication skills.
Inspect and troubleshoot HTTP request headers, payloads, and responses from OpenAI APIs, addressing issues such as authentication failures, rate limits, timeouts, and malformed inputs.
Analyze prompt structures to diagnose inconsistent or irrelevant AI responses, and refine instructions to reduce issues like hallucinations, leakage, or unexpected completions.
Investigate and resolve slow API response times or high inference latency caused by model selection or integration bottlenecks; optimize prompt design and leverage batching where applicable.
Address issues related to token limit exceedance by implementing prompt truncation or response summarization strategies.
Troubleshoot errors caused by exceeded API rate limits or usage quotas, and apply appropriate handling mechanisms.
Resolve access issues related to network environments, including firewall restrictions, VPN conflicts, or proxy configurations.
Monitor and adapt to changes in OpenAI API versions or deprecated endpoints to ensure integration stability.
Investigate potential data privacy risks, unsecured connections, or policy non-compliance in AI integrations.
Configure error logging and monitoring systems to capture failed interactions, analyze edge cases, and continuously improve prompt and API performance.

Qualifications
Minimum qualifications
Bachelor’s degree or certification from an accredited institution in technology, relevant experience, or equivalent training.
Preferred qualifications
Technical certification (MS Azure Fundamentals) will be an added benefit.
Excellent communication skills and strong customer service skills.

Why join Genpact?
Lead AI-first transformation – Build and scale AI solutions that redefine industries
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills
Grow with the best – Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace
Committed to ethical AI – Work in an environment where governance, transparency, and security are at the core of everything we build
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
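One common "appropriate handling mechanism" for the rate-limit and quota errors described in this role is retrying with exponential backoff and jitter. A minimal generic sketch, not tied to any particular SDK (`RateLimitError` and `flaky_call` are hypothetical stand-ins for an HTTP 429 and an API call):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 "Too Many Requests" from any LLM API."""

def with_backoff(call, max_retries=5, base_delay=0.01):
    """Retry `call` on RateLimitError, sleeping base * 2^attempt with jitter
    so that many clients do not retry in lockstep."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # budget exhausted; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Simulated endpoint that returns 429 twice before succeeding.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

result = with_backoff(flaky_call)
```

The same wrapper pattern also covers transient timeouts; only the caught exception type changes.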
Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Lead Associate
Primary Location: India-Gurugram
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 22, 2025, 4:04:38 AM
Unposting Date: Ongoing
Master Skills List: Consulting
Job Category: Full Time

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Java Spring Boot is a must-have; candidates will be trained in Microservices and the Banking 101 domain (3 weeks of training; L1 by Raja and team along with communication assessment, L2 from the account).
Java, Angular, Microservices, and Oracle SQL queries are must-haves; candidates will be trained on PL/SQL and complex queries (2 weeks of training; L1 by Raja and team along with communication assessment, L2 from the account).

Job Description
Sr. Application Java Developer with strong technical knowledge and 6 to 10 years of experience in designing, developing, and supporting web-based applications using Java technologies. The candidate must have strong experience in Java, J2EE, JavaScript, APIs, Microservices, building APIs, and SQL, along with excellent verbal and written communication skills. The candidate should have good experience in developing APIs with the expected output structure and high performance.
Should be experienced in implementing APIs based on enterprise-level architecture frameworks and guidelines.
Writing well-designed, testable, efficient backend and middleware code and building APIs using Java (e.g., Hibernate, Spring).
Strong experience in designing and developing high-volume, low-latency REST APIs, especially based on relational databases such as SQL Server.
Should be able to build an API from scratch based on a traditional DB and provide JSON output in the requested structure.
Develop technical designs for application development/Web APIs.
Conducting software analysis, programming, testing, and debugging.
Designing and implementing relational schemas in Microsoft SQL Server and Oracle.
Debugging application/system errors on development, QA, and production systems.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Experience in SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
Experience in training, building, and deploying ML and DL models.
Experience in Hugging Face, Chainlit, React.
Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end.
Ability to adapt quickly to open-source products and tools to integrate with ML platforms.
Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.).
Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI.
Experience with LLMs like PaLM, GPT-4, and Mistral (open-source models).
Work through the complete lifecycle of Gen AI model development, from training and testing to deployment and performance monitoring.
Developing and maintaining AI pipelines with multiple modalities like text, image, and audio.
Have implemented real-world chatbots or conversational agents at scale handling different data sources.
Experience in developing image generation/translation tools using latent diffusion models like Stable Diffusion or InstructPix2Pix.
Expertise in handling large-scale structured and unstructured data.
Efficiently handled large-scale generative AI datasets and outputs.
Familiarity with Docker tools and pipenv/conda/poetry environments.
Comfort in following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.).
Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.).
High familiarity with DL theory/practices in NLP applications.
Comfort coding in Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and Pandas.
Comfort using two or more open-source NLP modules like spaCy, TorchText, fastai.text, farm-haystack, and others.
Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.).
Have implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER, or QA) from data preparation and model creation through inference and deployment.
Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI.
Good working knowledge of other open-source packages to benchmark and derive summaries.
Experience using GPUs/CPUs on cloud and on-prem infrastructure.
Skill set to leverage cloud platforms for Data Engineering, Big Data, and ML needs.
Use of Docker (experience with experimental Docker features, docker-compose, etc.).
Familiarity with orchestration tools such as Airflow and Kubeflow.
Experience in CI/CD and infrastructure-as-code tools like Terraform.
Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
Ability to develop APIs with compliant, ethical, secure, and safe AI tooling.
Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
A deeper understanding of JavaScript, CSS, Angular, HTML, etc., is a plus.
Responsibilities
Design NLP/LLM/GenAI applications/products by following robust coding practices.
Explore SoTA models/techniques so that they can be applied to automotive industry use cases.
Conduct ML experiments to train/infer models; if need be, build models that abide by memory and latency restrictions.
Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools.
Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.).
Converge multiple bots into super apps using LLMs with multiple modalities.
Develop agentic workflows using AutoGen, Agent Builder, and LangGraph.
Build modular AI/ML products that can be consumed at scale.

Qualifications
Education: Bachelor’s or Master’s Degree in Computer Science, Engineering, Maths, or Science.
Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.
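The "fundamental text data processing" this posting lists (regex use, noise reduction, tokenization) reduces to a short pipeline. A minimal stdlib-only sketch, with the cleaning rules chosen here as illustrative assumptions rather than any particular library's behavior:

```python
import re

def clean_and_tokenize(text: str):
    """Minimal text preprocessing: lowercase, strip URLs, drop punctuation,
    collapse whitespace, then split into word tokens."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # remove URLs (noise for most NLP tasks)
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # keep only alphanumerics and spaces
    text = re.sub(r"\s+", " ", text).strip()    # collapse runs of whitespace
    return text.split()

tokens = clean_and_tokenize("Check https://example.com -- NLP preprocessing, step #1!")
# tokens → ['check', 'nlp', 'preprocessing', 'step', '1']
```

Real pipelines would swap the naive whitespace split for a subword tokenizer before feeding a BERT-style model, but the normalization stages stay in this order.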

Posted 2 weeks ago

Apply

3.0 - 6.0 years

4 Lacs

India

On-site

About MostEdge
MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app, we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences—empowering retailers, partners, and employees to accelerate commerce in a sustainable manner.

Job Summary:
We are seeking a highly skilled and motivated AI/ML Engineer specializing in Computer Vision and Unsupervised Learning to join our growing team. You will be responsible for building, optimizing, and deploying advanced video analytics solutions for smart surveillance applications, including real-time detection, facial recognition, and activity analysis. This role combines the core competencies of AI/ML modelling with the practical skills required to deploy and scale models in real-world production environments, both in the cloud and on edge devices.

Key Responsibilities:
AI/ML Development & Computer Vision
Design, train, and evaluate models for:
Face detection and recognition
Object/person detection and tracking
Intrusion and anomaly detection
Human activity or pose recognition/estimation
Work with models such as YOLOv8, DeepSORT, RetinaNet, Faster R-CNN, and InsightFace.
Perform data preprocessing, augmentation, and annotation using tools like LabelImg, CVAT, or custom pipelines.
Surveillance System Integration
Integrate computer vision models with live CCTV/RTSP streams for real-time analytics.
Develop components for motion detection, zone-based event alerts, person re-identification, and multi-camera coordination.
Optimize solutions for low-latency inference on edge devices (Jetson Nano, Xavier, Intel Movidius, Coral TPU).
Model Optimization & Deployment
Convert and optimize trained models using ONNX, TensorRT, or OpenVINO for real-time inference.
Build and deploy APIs using FastAPI, Flask, or TorchServe.
Package applications using Docker and orchestrate deployments with Kubernetes.
Automate model deployment workflows using CI/CD pipelines (GitHub Actions, Jenkins).
Monitor model performance in production using Prometheus, Grafana, and log management tools.
Manage model versioning, rollback strategies, and experiment tracking using MLflow or DVC.
As an AI/ML Engineer, you should be well-versed in AI agent development and have fine-tuning experience.
Collaboration & Documentation
Work closely with backend developers, hardware engineers, and DevOps teams.
Maintain clear documentation of ML pipelines, training results, and deployment practices.
Stay current with emerging research and innovations in AI vision and MLOps.

Required Qualifications:
Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
3–6 years of experience in AI/ML, with a strong portfolio in computer vision and machine learning.
Hands-on experience with:
Deep learning frameworks: PyTorch, TensorFlow
Image/video processing: OpenCV, NumPy
Detection and tracking frameworks: YOLOv8, DeepSORT, RetinaNet
Solid understanding of deep learning architectures (CNNs, Transformers, Siamese Networks).
Proven experience with real-time model deployment in cloud or edge environments.
Strong Python programming skills and familiarity with Git, REST APIs, and DevOps tools.

Preferred Qualifications:
Experience with multi-camera synchronization and NVR/DVR systems.
Familiarity with ONVIF protocols and camera SDKs.
Experience deploying AI models on Jetson Nano/Xavier, Intel NCS2, or Coral Edge TPU.
Background in face recognition systems (e.g., InsightFace, FaceNet, Dlib).
Understanding of security protocols and compliance in surveillance systems.
Tools & Technologies:
Languages & AI: Python, PyTorch, TensorFlow, OpenCV, NumPy, scikit-learn
Model Serving: FastAPI, Flask, TorchServe, TensorFlow Serving, REST/gRPC APIs
Model Optimization: ONNX, TensorRT, OpenVINO, pruning, quantization
Deployment: Docker, Kubernetes, Gunicorn, MLflow, DVC
CI/CD & DevOps: GitHub Actions, Jenkins, GitLab CI
Cloud & Edge: AWS SageMaker, Azure ML, GCP AI Platform, Jetson, Movidius, Coral TPU
Monitoring: Prometheus, Grafana, ELK Stack, Sentry
Annotation Tools: LabelImg, CVAT, Supervisely

Benefits:
Competitive compensation and performance-linked incentives.
Work on cutting-edge surveillance and AI projects.
Friendly and innovative work culture.

Job Types: Full-time, Permanent
Pay: From ₹400,000.00 per year
Benefits: Health insurance, Life insurance, Paid sick time, Paid time off, Provident Fund
Schedule: Evening shift, Monday to Friday, Morning shift, Night shift, Rotational shift, US shift, Weekend availability
Supplemental Pay: Performance bonus, Quarterly bonus
Work Location: In person
Application Deadline: 25/07/2025
Expected Start Date: 01/08/2025
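The zone-based motion detection this role mentions is often bootstrapped with simple frame differencing before any deep model runs. A NumPy-only sketch of that idea (real pipelines would read grayscale frames from an RTSP stream, e.g. via OpenCV; the frames, zone, and thresholds here are illustrative assumptions):

```python
import numpy as np

def motion_in_zone(prev_frame, frame, zone, threshold=25, min_changed=10):
    """Flag motion inside a rectangular zone by frame differencing.

    prev_frame/frame: 2-D uint8 grayscale arrays; zone: (y0, y1, x0, x1).
    Motion is reported when at least `min_changed` pixels in the zone
    change by more than `threshold` gray levels between frames.
    """
    y0, y1, x0, x1 = zone
    # Widen to int16 so the subtraction cannot wrap around in uint8.
    diff = np.abs(frame[y0:y1, x0:x1].astype(np.int16)
                  - prev_frame[y0:y1, x0:x1].astype(np.int16))
    return int((diff > threshold).sum()) >= min_changed

prev = np.zeros((64, 64), dtype=np.uint8)     # empty scene
cur = prev.copy()
cur[10:20, 10:20] = 200                       # simulate an object entering
alert = motion_in_zone(prev, cur, zone=(0, 32, 0, 32))
# alert → True; the same object triggers nothing in the opposite quadrant.
```

In a deployed system this cheap check gates the expensive detector: YOLO-style inference only runs on frames (and zones) where motion fired, which is how edge devices keep latency down.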

Posted 2 weeks ago

Apply

3.0 years

4 Lacs

Vadodara

On-site

About MostEdge
MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app, we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences—empowering retailers, partners, and employees to accelerate commerce in a sustainable manner.

Job Summary:
We are hiring an experienced AI Engineer / ML Specialist with deep expertise in Large Language Models (LLMs) who can fine-tune, customize, and integrate state-of-the-art models like OpenAI GPT, Claude, LLaMA, Mistral, and Gemini into real-world business applications. The ideal candidate should have hands-on experience with foundation model customization, prompt engineering, retrieval-augmented generation (RAG), and deployment of AI assistants using public cloud AI platforms like Azure OpenAI, Amazon Bedrock, Google Vertex AI, or Anthropic’s Claude.

Key Responsibilities:
LLM Customization & Fine-Tuning
Fine-tune popular open-source LLMs (e.g., LLaMA, Mistral, Falcon, Mixtral) using business/domain-specific data.
Customize foundation models via instruction tuning, parameter-efficient fine-tuning (LoRA, QLoRA, PEFT), or prompt tuning.
Evaluate and optimize the performance, factual accuracy, and tone of LLM responses.
AI Assistant Development
Build and integrate AI assistants/chatbots for internal tools or customer-facing applications.
Design and implement Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, Haystack, or the OpenAI Assistants API.
Use embedding models, vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB), and cloud AI services.
Must have experience fine-tuning and maintaining microservices or LLM-driven databases.
Cloud Integration
Deploy and manage LLM-based solutions on AWS Bedrock, Azure OpenAI, Google Vertex AI, Anthropic Claude, or the OpenAI API.
Optimize API usage, performance, latency, and cost.
Secure integrations with identity/auth systems (OAuth2, API keys) and logging/monitoring.
Evaluation, Guardrails & Compliance
Implement guardrails, content moderation, and RLHF techniques to ensure safe and useful outputs.
Benchmark models using human evaluation and standard metrics (e.g., BLEU, ROUGE, perplexity).
Ensure compliance with privacy, IP, and data governance requirements.
Collaboration & Documentation
Work closely with product, engineering, and data teams to scope and build AI-based solutions.
Document custom model behaviors, API usage patterns, prompts, and datasets.
Stay up to date with the latest LLM research and tooling advancements.

Required Skills & Qualifications:
Bachelor’s or Master’s in Computer Science, AI/ML, Data Science, or related fields.
3–6+ years of experience in AI/ML, with a focus on LLMs, NLP, and GenAI systems.
Strong Python programming skills and experience with Hugging Face Transformers, LangChain, and LlamaIndex.
Hands-on with LLM APIs from OpenAI, Azure, AWS Bedrock, Google Vertex AI, Claude, Cohere, etc.
Knowledge of PEFT techniques like LoRA, QLoRA, Prompt Tuning, and Adapters.
Familiarity with vector databases and document embedding pipelines.
Experience deploying LLM-based apps using FastAPI, Flask, Docker, and cloud services.

Preferred Skills:
Experience with open-source LLMs: Mistral, LLaMA, GPT-J, Falcon, Vicuna, etc.
Knowledge of AutoGPT, CrewAI, agentic workflows, or multi-agent LLM orchestration.
Experience with multi-turn conversation modeling and dialogue state tracking.
Understanding of model quantization, distillation, or fine-tuning in low-resource environments.
Familiarity with ethical AI practices, hallucination mitigation, and user alignment.
Tools & Technologies:
LLM Frameworks: Hugging Face Transformers, PEFT, LangChain, LlamaIndex, Haystack
LLMs & APIs: OpenAI (GPT-4, GPT-3.5), Claude, Mistral, LLaMA, Cohere, Gemini, Azure OpenAI
Vector Databases: FAISS, Pinecone, Weaviate, ChromaDB
Serving & DevOps: Docker, FastAPI, Flask, GitHub Actions, Kubernetes
Deployment Platforms: AWS Bedrock, Azure ML, GCP Vertex AI, Lambda, Streamlit
Monitoring: Prometheus, MLflow, Langfuse, Weights & Biases

Benefits:
Competitive salary with performance incentives.
Work with cutting-edge GenAI and LLM technologies.
Build real-world products using state-of-the-art AI research.

Job Types: Full-time, Permanent
Pay: From ₹400,000.00 per year
Benefits: Health insurance, Life insurance, Paid sick time, Paid time off, Provident Fund
Schedule: Evening shift, Monday to Friday, Morning shift, Night shift, Rotational shift, US shift, Weekend availability
Supplemental Pay: Performance bonus, Quarterly bonus, Yearly bonus
Work Location: In person
Application Deadline: 25/07/2025
Expected Start Date: 01/08/2025
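The retrieval half of the RAG pipelines this role describes is just a similarity ranking followed by prompt assembly. A framework-free sketch with toy 3-dimensional vectors standing in for a real embedding model's output (the documents and vectors are illustrative assumptions):

```python
import numpy as np

def top_k_context(doc_vecs, docs, query_vec, k=2):
    """Retrieval step of a RAG pipeline: rank documents by cosine similarity
    to the query embedding and return the k best as grounding context."""
    norms = np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    sims = doc_vecs @ query_vec / norms      # cosine similarity per document
    order = np.argsort(-sims)[:k]            # highest similarity first
    return [docs[i] for i in order]

docs = ["refund policy", "shipping times", "loyalty program"]
# Toy embeddings; a real system would call an embedding model here.
doc_vecs = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.7, 0.7, 0.0]])
query_vec = np.array([0.9, 0.1, 0.0])        # "embedding" of the user question

context = top_k_context(doc_vecs, docs, query_vec)
# The generation half then grounds the LLM on the retrieved passages:
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: ..."
```

Frameworks like LangChain or LlamaIndex wrap exactly this retrieve-then-prompt loop, swapping the brute-force ranking for a vector database lookup.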

Posted 2 weeks ago

Apply

1.0 years

1 - 2 Lacs

Lucknow

On-site

Job Title: Node.js Developer
Company: Fortenet Skills Network
Location: [Lucknow, U.P]
Job Type: Full-time

Job Description:
We are looking for a passionate and dedicated Node.js Developer (Fresher) to join our dynamic team. As a Node.js Developer, you will be responsible for building and maintaining scalable applications that support our mission and vision. This is an excellent opportunity for fresh graduates who are eager to learn and grow in a supportive environment.

Responsibilities:
Develop and maintain server-side applications using Node.js.
Collaborate with front-end developers to integrate user-facing elements with server-side logic.
Write reusable, testable, and efficient code.
Design and implement low-latency, high-availability, and performant applications.
Participate in code reviews and contribute to the team’s best practices.
Troubleshoot and debug applications to ensure smooth functionality.
Stay up-to-date with the latest industry trends and technologies.

Requirements:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Strong knowledge of JavaScript and Node.js.
Basic understanding of front-end technologies, such as HTML, CSS, and JavaScript.
Familiarity with RESTful APIs and asynchronous programming.
Good problem-solving skills and keen attention to detail.
Strong communication and teamwork skills.
Eagerness to learn and adapt to new technologies.

Preferred Qualifications:
Experience with database management systems like MongoDB or MySQL.
Understanding of version control systems such as Git.
Knowledge of frameworks such as Express.js.
Basic knowledge of cloud services (e.g., AWS, Azure).

What We Offer:
Opportunity to work in a passionate and driven team.
Hands-on training and mentorship.
Career growth and development opportunities.
Friendly and inclusive work environment.
Competitive salary and benefits.
How to Apply:
Interested candidates are invited to submit their resume and a cover letter detailing their interest in and qualifications for the position. Please include any relevant project work or portfolio links.

Contact Information:
Shubham Singh
Head of Operations
Fortenet Skills Network
[shubham.singh@fortenet.in]
[7054001058]

Job Types: Full-time, Permanent
Pay: ₹10,000.00 - ₹21,000.00 per month
Experience: Ext JS: 1 year (Preferred); Node.js: 1 year (Required)
Work Location: In person

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Requirements
Description and Requirements

Job Responsibilities
Manages design, distribution, performance, replication, security, availability, and access requirements for large and complex SQL and Sybase databases.
Designs and develops physical layers of databases to support various application needs.
Implements backup, recovery, archiving, and conversion strategies, and performance tuning.
Manages job scheduling, application releases, database changes, and compliance.
Identifies and resolves problems utilizing structured tools and techniques.
Provides technical assistance and mentoring to staff in all aspects of database management; consults and advises application development teams on database security, query optimization, and performance.
Writes scripts for automating routine DBA tasks and documents database maintenance processing flows per standards.
Implements industry best practices while performing database administration tasks.
Works in an Agile model with an understanding of Agile concepts.
Collaborates with development teams to provide and implement new features.
Able to debug production issues by analyzing logs directly and using tools like Splunk.
Begins tackling organizational impediments.
Learns new technologies based on demand and helps team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements
Education: Bachelor’s degree in Computer Science, Information Systems, or another related field, with 10+ years of IT and infrastructure engineering work experience.
Experience (in years): 10+ years total IT experience and 7+ years relevant experience in SQL Server and Sybase databases.

Technical Skills
Database Management: Expert in managing and administering SQL Server, Azure SQL Server, and Sybase databases, ensuring high availability and optimal performance.
Data Infrastructure & Security: Expertise in designing and implementing robust data infrastructure solutions, with a strong focus on data security and compliance.
Backup & Recovery: Skilled in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
Performance Tuning & Optimization: Adept at performance tuning and optimization of databases, leveraging advanced techniques to enhance system efficiency and reduce latency.
Cloud Computing & Scripting: Experienced in cloud computing environments and proficient in operating system scripting, enabling seamless integration and automation of database operations.
Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
Strong database analytical skills to improve application performance.
Should have strong working knowledge of database performance tuning, backup and recovery, infrastructure as code, and observability tools (Elastic).
Must have experience with automation tools and programming such as Ansible and Python.
Strong knowledge of ITSM processes and tools (ServiceNow).
Ability to work 24/7 rotational shifts to support the database and Splunk platforms.

Other Critical Requirements
Excellent analytical and problem-solving skills.
Experience managing geographically distributed and culturally diverse workgroups, with strong team management, leadership, and coaching skills.
Excellent written and oral communication skills, including the ability to clearly communicate and articulate technical and functional issues with conclusions and recommendations to stakeholders.
Prior experience in handling stateside and offshore stakeholders.
Experience in creating and delivering business presentations.
Demonstrated ability to work independently and in a team environment.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible. Join us!

Posted 2 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Graduate Engineer Trainee
Company: NEC Corporation India Private Limited
Employment Type
Office Location: Noida, UP, IN, 201305
Work Location: Hybrid
Req ID: 5232

Description
Job Title: Graduate Engineer Trainee
Organization Name: NEC Corporation India Limited
Reporting Relationship: Reports to the Group Technical Manager

Role Summary
Will support the planning, implementation, testing, and maintenance of network infrastructure under the guidance of senior engineers and technical managers. This entry-level role is designed to provide fresh engineering graduates with practical exposure to enterprise and field-level networking systems, including routing and switching setups, fiber networks, wireless communication, and IP-based systems typically deployed in large-scale infrastructure projects.

Responsibilities
Assist in the installation, configuration, and testing of routers, switches, firewalls, and network cabinets.
Support senior engineers in field surveys, documentation, and network layout design (AutoCAD/Visio, etc.).
Participate in the implementation of network topologies, including fiber backbone (OSP/ISP), Ethernet PoE, and wireless systems.
Conduct basic network performance testing, including ping, throughput, latency, and signal quality checks.
Help maintain accurate network inventory, IP schemes, and as-built documentation.
Support Site Acceptance Testing (SAT) and assist during integration with central systems (SCADA, CCTV, PIDS, etc.).
Learn and apply network standards and protocols (e.g., TCP/IP, MPLS, SNMP, VLANs, QoS).
Report site activities, issues, and test results to the Project Engineer or Network Lead.

Pre-requisites
B.E./B.Tech in Computer Engineering, Electronics & Communication, or Information Technology (recent graduate).
Basic knowledge of networking fundamentals, IP addressing, and the OSI model.
Strong desire to learn and grow in network and communication systems engineering and cloud infrastructure.

Specialization Description
Responsible for developing and executing a Development Operations (DevOps) strategy to ensure quality software deployments and overall application health and performance. Optimizes relationships between the development, quality assurance, and IT operations teams. Promotes communication, integration, and collaboration for enhanced software development productivity. Develops infrastructure to incorporate the latest technology best practices and improve operational performance. Requires broad technical knowledge and experience across a variety of IT areas, including infrastructure, development, operations, and quality assurance.

Level Description
An experienced support-level position that requires a basic knowledge of a given job area and tools, typically gained through work experience as well as vocational or technical training. Works under moderate supervision. Problems are typically of a routine nature but may at times require interpretation or deviation from standard procedures. Communicates information that requires some explanation or interpretation to achieve business results for a given area of a department or function.
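The basic performance checks the role mentions (ping latency and signal quality) reduce to simple statistics over round-trip-time samples. A stdlib-only sketch; the RTT values are made-up examples, and "jitter" here is taken as the mean absolute difference between consecutive samples, one common working definition:

```python
import statistics

def latency_summary(rtts_ms):
    """Summarize ping-style round-trip times in milliseconds:
    min/avg/max plus jitter (mean absolute change between consecutive samples)."""
    jitter = (sum(abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:]))
              / (len(rtts_ms) - 1))
    return {
        "min": min(rtts_ms),
        "avg": statistics.fmean(rtts_ms),
        "max": max(rtts_ms),
        "jitter": jitter,
    }

samples = [12.0, 15.0, 11.0, 14.0]   # example RTTs from four ping replies
stats = latency_summary(samples)
# stats → {'min': 11.0, 'avg': 13.0, 'max': 15.0, 'jitter': 3.333...}
```

In the field the samples would come from a ping or iperf run; comparing these summaries before and after a change is the core of a basic SAT latency check.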

Posted 2 weeks ago

Apply