
3241 Performance Tuning Jobs - Page 22

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

2.0 - 5.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Naukri logo

We are seeking a highly skilled .NET Backend Developer with expertise in C#, SQL Server, MongoDB, MySQL, and large-scale data processing as core skills. This role focuses on efficient data ingestion, structured data integration, and high-speed processing of large datasets while ensuring optimal memory and resource utilization. The ideal candidate should have deep experience in handling structured and unstructured data, multi-threaded processing, efficient database optimization, and real-time data synchronization to support a scalable, performance-driven backend architecture.

Key Focus Areas
- Efficient Data Ingestion & Processing: Developing scalable pipelines to process large structured/unstructured data files.
- Data Integration & Alignment: Merging datasets from multiple sources with consistency.
- Database Expertise & Performance Optimization: Designing high-speed relational database structures for efficient storage and retrieval.
- High-Performance API Development: Developing low-latency RESTful APIs to handle large data exchanges efficiently.
- Multi-Threaded Processing & Parallel Execution: Implementing concurrent data processing techniques to optimize system performance.
- Caching Strategies & Load Optimization: Utilizing in-memory caching and indexing to reduce I/O overhead.
- Real-Time Data Processing & Streaming: Using message queues and data streaming for optimized data distribution.
Required Skills & Technologies
- Backend Development: C#, .NET Core, ASP.NET Core Web API
- Data Processing & Integration: Efficient data handling, multi-source data processing
- Database Expertise: SQL Server, MongoDB, MySQL (schema optimization, indexing, query optimization, partitioning, bulk processing)
- Performance Optimization: Multi-threading, parallel processing, high-throughput computing
- Caching & Memory Management: Redis, Memcached, IndexedDB, database query caching
- Real-Time Data Processing: Kafka, RabbitMQ, WebSockets, SignalR
- File Processing & ETL Pipelines: Efficient data extraction, transformation, and storage pipelines
- Logging & Monitoring: Serilog, Application Insights, ELK Stack
- CI/CD & Cloud Deployments: Azure DevOps, Kubernetes, Docker

Key Responsibilities
1. Data Ingestion & Processing: Develop scalable data pipelines to handle high-throughput structured and unstructured data ingestion. Implement multi-threaded data processing mechanisms to optimize efficiency. Optimize memory management techniques to handle large-scale data operations.
2. Data Integration & Alignment: Implement high-speed algorithms to merge and integrate datasets efficiently. Ensure data consistency and accuracy across multiple sources. Optimize data buffering and streaming techniques to prevent processing bottlenecks.
3. High-Performance API Development: Design and develop high-speed APIs for efficient data retrieval and updates. Implement batch processing and streaming capabilities to manage large data payloads. Optimize API response times and query execution plans.
4. Database Expertise & Optimization (SQL Server, MongoDB, MySQL): Design efficient database schemas to support large-scale data transactions. Implement bulk data operations, indexing, and partitioning for high-speed retrieval. Optimize stored procedures and concurrency controls to support high-frequency transactions. Use sharding and distributed database techniques for enhanced scalability.
5. Caching & Load Balancing: Deploy Redis/Memcached/IndexedDB caching to improve database query performance. Implement data pre-fetching and cache invalidation strategies for real-time accuracy. Optimize load balancing techniques for efficient request distribution.
6. Real-Time Data Synchronization & Streaming: Implement event-driven architectures using message queues (Kafka, RabbitMQ, etc.). Utilize WebSockets/SignalR for real-time data synchronization. Optimize incremental updates instead of full data reloads for better resource efficiency.

Preferred Additional Experience
- Experience handling large-scale databases and high-throughput data environments.
- Expertise in distributed database architectures for large-scale structured data storage.
- Hands-on experience with query profiling and performance tuning tools.
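The multi-threaded, chunked ingestion pattern this posting describes can be sketched briefly. The role itself is C#/.NET; the sketch below uses Python purely for illustration, and the `ingest`/`process_chunk` names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Illustrative per-chunk work: count the non-empty records.
    return sum(1 for line in chunk if line.strip())

def ingest(lines, chunk_size=4, workers=4):
    # Split the input into fixed-size chunks and process them in parallel,
    # mirroring the multi-threaded batch ingestion the posting describes.
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

records = ["r%d" % i for i in range(10)] + ["", "  "]
print(ingest(records))  # 10 non-empty records
```

In .NET the same shape would typically use `Parallel.ForEach` or TPL Dataflow; the key design point is bounding memory by streaming fixed-size chunks rather than loading the whole file.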

Posted 1 week ago


3.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office


Join our Team. About this opportunity: The Infra Cloud Team is looking for a Technical Authority Expert (JS5) to investigate, diagnose, and troubleshoot issues and perform MSSQL database administration. Technical proficiency in Oracle database administration as a secondary skill is an added advantage. The candidate has to follow standard procedures for proper escalation of unresolved issues to the appropriate internal teams and walk clients (internal) through a series of actions, via phone, email, or chat, until the request is closed.

What you will do:
- Proven experience as a Microsoft SQL Server DBA, managing enterprise-level database systems.
- Strong knowledge of Microsoft SQL Server, including database design, administration, performance tuning, and troubleshooting.
- Experience with database backup and recovery strategies, including disaster recovery planning and implementation.
- Proficiency in SQL programming and scripting languages.
- Familiarity with database security concepts and best practices.
- Understanding of high availability (HA) and failover clustering technologies.
- Excellent analytical and problem-solving skills, with the ability to diagnose and resolve complex database issues.
- Strong organizational and time management skills, with the ability to handle multiple tasks and priorities simultaneously.
- Microsoft SQL Server certification (e.g., MCSA, MCSE) or equivalent industry certifications.
- Experience with cloud-based database platforms, such as Microsoft Azure SQL Database.
- Familiarity with data warehousing concepts and technologies.
- Knowledge of PowerShell or other automation tools for database administration tasks.
- Experience with performance monitoring and tuning tools, such as SQL Server Profiler or Extended Events.
- Effective communication and interpersonal skills, with the ability to collaborate with cross-functional teams and communicate technical concepts to non-technical stakeholders.
The skills you bring:
- Oracle 10g, 11g, 12c, 18c, and 19c versions (18c and 19c are an additional benefit)
- Performance tuning (query optimization, instance tuning)
- Upgrade knowledge: Oracle database upgrades from 12c to 19c and from 11g to 12c
- Installation, configuration, and upgrading of Oracle server software and related products (clients, 2-node RAC, standalone)
- Good understanding of RAC (2 or 3 nodes) and its services
- Understanding of Active/Passive and Active/Active setups (ASM and Veritas Cluster)
- Good knowledge of physical database design
- Good understanding of the underlying operating system (Windows, Unix, Linux, Solaris, and AIX) and the key commands on each to check server load (top, topas, sar, vmstat)
- Partitioning (partitioned tables and their types: Range, Hash, etc.) and partition pruning
- Indexing (local and global indexes, their benefits and drawbacks)
- Experience with heavy databases of 100+ TB (filesystem and ASM)
- Adding disks in ASM, and adding datafiles, tempfiles, etc. in RAC

Why join Ericsson? What happens once you apply? Primary country and city: India (IN) || [[location_obj]] Req ID: 755095
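Range partitioning with partition pruning, which this posting calls out, means the optimizer skips partitions whose bounds cannot match the query predicate. A minimal illustration of that bookkeeping (Python used purely as a sketch; Oracle declares these bounds with `VALUES LESS THAN`):

```python
# Oracle-style range partitions: each holds rows with key < upper_bound.
partitions = [("p2022", 2023), ("p2023", 2024), ("p2024", 2025)]

def prune(partitions, lo, hi):
    # Keep only partitions whose range [prev_bound, upper_bound) overlaps
    # the predicate range [lo, hi] -- everything else is never scanned.
    kept, prev = [], float("-inf")
    for name, upper in partitions:
        if prev <= hi and upper > lo:
            kept.append(name)
        prev = upper
    return kept

print(prune(partitions, 2024, 2024))  # ['p2024'] -- only one partition scanned
```

This is why a predicate on the partition key turns a full scan of a 100+ TB table into a scan of one or two partitions.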

Posted 1 week ago


2.0 - 3.0 years

4 - 5 Lacs

Bengaluru

Work from Office


Job Title: Java APIGEE Developer - Bangalore. About Us: Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry - projects that will transform the financial services industry. MAKE AN IMPACT: Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.
Key skills: Core Java, Spring Boot, Microservices, APIGEE, REST API, DevOps, and AWS. Location: Bangalore (hybrid - 3 days WFO). Shift timings: 12:30 pm-9:30 pm. Looking only for immediate joiners.

Technical Requirement - Role Description and Responsibilities:
- Work closely with business units, application teams, infrastructure areas, and vendors to identify, review, and evaluate the solution requirements.
- Investigate and propose strategic fits for virtualization, consolidation, and rationalization solution opportunities within the infrastructure or business.
- Propose changes to the technical design solutions as applicable.
- Evaluate and align strategic-fit solutions across platforms and solutions specific to system hardware and software technologies.
- Understand, participate in, review, and influence long-term capacity planning and technology investments.

Must-Have Skills:
- Bachelor's degree from an accredited university in Computer Science, Engineering, or a technology-related field, or the equivalent through a combination of education and/or technology experience.
- Proficient Angular and front-end development skills.
- Good experience in Core Java, Spring Boot, Microservices, REST API, and AWS; AWS infrastructure experience required.
- The role typically focuses on designing, building, and managing APIs using the Apigee platform. This involves creating API proxies, implementing security policies, and integrating with backend systems. It often requires strong Java development skills, experience with RESTful web services, and knowledge of API management concepts.

Java Backend Skills:
- Core Java & OOP: Strong understanding of Core Java, up to version 21.
- Spring Framework: Expertise in Spring Boot, Spring MVC, Spring Security, Spring Data JPA, and microservices architecture.
- Spring Boot & Spring Cloud: Expertise in building scalable microservices with service discovery (Eureka), API Gateway (Spring Cloud Gateway), and configuration management (Spring Cloud Config).
- Database & ORM: Proficiency in SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis), with Hibernate/JPA for ORM.
- RESTful APIs & Messaging: Developing secure REST APIs, GraphQL, and WebSockets, and using message brokers (Kafka, RabbitMQ).
- DevOps & Testing: CI/CD pipelines, Docker, Kubernetes, JUnit, Mockito, integration testing, and performance tuning.

If you are keen to join us, you will be part of an organization that values your contributions, recognizes your potential, and provides ample opportunities for growth. For more information, visit www.capco.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
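One of the API management concepts the posting mentions, traffic policies on a gateway, usually boils down to a token bucket. A minimal sketch of that policy (Python purely for illustration; in Apigee itself this is configured declaratively, e.g. via a SpikeArrest policy, not hand-coded):

```python
class TokenBucket:
    # Minimal token-bucket rate limiter: tokens refill continuously at
    # `rate_per_s`, and each request consumes one token if available.
    def __init__(self, rate_per_s, capacity):
        self.rate, self.capacity = rate_per_s, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, now):
        # Refill based on elapsed time, capped at capacity, then try to spend.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_s=1, capacity=2)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.2)])  # [True, True, False, True]
```

The burst of three requests in 0.2 s exhausts the two-token capacity; by t=1.2 s the refill has restored a token, so the request passes.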

Posted 1 week ago


10.0 - 15.0 years

10 - 14 Lacs

Bengaluru

Work from Office


Title: SAP S/4HANA Technical Lead - ABAP & Fiori. About GlobalFoundries: GlobalFoundries is a leading full-service semiconductor foundry providing a unique combination of design, development, and fabrication services to some of the world's most inspired technology companies. With a global manufacturing footprint spanning three continents, GlobalFoundries makes possible the technologies and systems that transform industries and give customers the power to shape their markets. For more information, visit www.gf.com. Introduction: This is a solution delivery lead role for SAP S/4HANA - ABAP & Fiori, based out of the GlobalFoundries office in Bangalore, India. It requires a mix of technical expertise, leadership abilities, team management skills, and business acumen to lead development strategy and solution design in collaboration with implementation partners, GFIT, and business users. Here are some key responsibilities and qualifications for this role:

Responsibilities:
- Lead S/4HANA ABAP & Fiori development strategy and solution design to ensure quality, consistency, and efficiency.
- Conduct code reviews, provide technical guidance, and mentor team members.
- Ensure adherence to SAP best practices and coding standards.
- Collaborate with functional teams to understand business requirements and translate them into technical solutions (SAP ABAP RICEFW objects).
- Implement enhancements using BAdIs, user exits, and enhancement spots.
- Develop and support SAP Fiori applications, including custom apps and templates.
- Implement and optimize OData services and SAP Gateway configurations.
- Develop and maintain ALE/IDoc, RFC, and BAPI-based integrations.
- Guide the team in optimizing existing ABAP programs for performance.
- Stay updated with the latest SAP technologies and trends.
Other Responsibilities: Perform all activities in a safe and responsible manner and support all Environmental, Health, Safety & Security requirements and programs.

Required Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field.
- Proven experience in SAP ABAP development, with a minimum of 10 years overall and at least 5 years in a lead role, especially for S/4HANA.
- Strong knowledge of SAP modules and integration points.
- Proficiency in Object-Oriented ABAP, the Enhancement Framework, BAPIs, BAdIs, Smart Forms, Workflows, and debugging tools.
- Proficiency in creating and consuming OData services for Fiori applications.
- Advanced debugging capabilities to identify issues and guide teams to resolution in both ABAP and Fiori applications.
- Strong analytical mindset and problem-solving skills to address technical and business challenges.
- Competence in resource allocation, timeline management, and task prioritization.
- Effective communication and leadership abilities.
- Hands-on experience implementing S/4HANA ABAP best practices, such as code push-down techniques (CDS views, AMDP), performance tuning/optimization, and ATC.
- Experience working with IDEs such as Eclipse and Business Application Studio, and with code migration transport options through the landscape.
- Experience in BTP side-by-side extensions, in-app extensions, and configuration is preferred.
- Relevant SAP certifications are preferred.

GlobalFoundries is an equal opportunity employer, cultivating a diverse and inclusive workforce. We believe having a multicultural workplace enhances productivity, efficiency, and innovation whilst our employees feel truly respected, valued, and heard. As an affirmative employer, all qualified applicants are considered for employment regardless of age, ethnicity, marital status, citizenship, race, religion, political affiliation, gender, sexual orientation, and medical and/or physical abilities.
All offers of employment with GlobalFoundries are conditioned upon the successful completion of background checks, medical screenings as applicable and subject to the respective local laws and regulations. To ensure that we maintain a safe and healthy workplace for our GlobalFoundries employees, please note that offered candidates who have applied for jobs in India will have to be fully vaccinated prior to their targeted start date. For new hires, the appointment is contingent upon the provision of a copy of their COVID-19 vaccination document, subject to any written request for medical or religious accommodation.

Posted 1 week ago


10.0 - 15.0 years

3 - 7 Lacs

Bengaluru

Work from Office


Total Experience: 10+ years. Relevant Experience: 10+ years. Rate: 11,000 INR/day. Interview Mode: one F2F round is mandatory (kindly avoid candidates who cannot attend one F2F round). The candidate should be ready to join as a subcontractor. If the relevant and total years of experience are not as mentioned, the profile will be rejected outright. Profiles above the quoted rate will be rejected outright. Please treat the below requirement as critical and share two quality profiles who are genuinely interested in a subcontractor role. Kindly go through the instructions below very carefully before submitting, with the requested details:
- Vendors should check the requirement clearly and not send profiles based on keyword search alone.
- Vendors should check the availability and interest of the resource to join as a subcontractor.
- Kindly submit profiles within the rate card.
- Ensure there are no ex-Infosys employee profile submissions, as we have a 6-month cooling period.
- We need only the top 1-2 quality profiles; avoid multiple mail threads on profile submission.
ECMS Req #: 514780
Number of Openings: 1
Duration of Hiring: 12 months
Relevant and Total Years of Experience: 10+

Detailed job description - skill set:
- Create, test, and implement enterprise-level apps with Snowflake
- Design and implement features for identity and access management
- Create authorization frameworks for better access control
- Implement client query optimization and major security competencies, including encryption
- Solve performance and scalability issues in the system
- Transaction management with distributed data processing algorithms
- Possess ownership right from start to finish
- Migrate solutions from an on-premises setup to cloud-based platforms
- Understand and implement the latest delivery approaches based on data architecture
- Project documentation and tracking based on understanding user requirements
- Perform data integration with third-party tools, including architecting, designing, coding, and testing phases
- Manage documentation of data models, architecture, and maintenance processes
- Continually review and audit data models for enhancement
- Performance tuning, user acceptance training, application support
- Maintain confidentiality of data
- Risk assessment, management, and mitigation plans
- Regular engagement with teams for status reporting and routine activities
- Migration activities from one database to another, or from on-premises to cloud

Mandatory Skills (only 2 or 3): Snowflake Developer
Vendor Billing Range (per day, local currency): 11,000 INR/day
Work Location: Chennai, Hyderabad, Bangalore, or Mysore (Infosys location)
WFO/WFH/Hybrid: WFO
Working in shifts outside standard daylight hours (to avoid confusion post-onboarding): NO
Mode of Interview: F2F
BG Check Before or After Onboarding: Post-onboarding
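The migration work listed above (database to database, or on-premises to cloud) is commonly validated by reconciling row counts and checksums between source and target. A small database-agnostic sketch of that idea, with a hypothetical `table_fingerprint` helper (illustrative only; in Snowflake itself one would typically compare `COUNT(*)` plus an aggregate hash per table):

```python
import hashlib

def table_fingerprint(rows):
    # Order-insensitive fingerprint: hash each row, XOR the digests together,
    # so source and target can be compared without sorting either side.
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return len(rows), acc

source = [(1, "a"), (2, "b"), (3, "c")]
target = [(3, "c"), (1, "a"), (2, "b")]  # same rows, different load order
print(table_fingerprint(source) == table_fingerprint(target))  # True
```

Because XOR is commutative, load order does not matter, while any missing, duplicated, or altered row changes the fingerprint.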

Posted 1 week ago


1.0 - 6.0 years

20 - 25 Lacs

Chennai

Work from Office


Join Zuora's high-impact Operations team, where you'll be instrumental in maintaining the reliability, scalability, and performance of our SaaS platform. This role involves proactive service monitoring, incident response, infrastructure service management, and ownership of internal and external shared services to ensure optimal system availability and performance. You will work alongside a team of skilled engineers dedicated to operational excellence through automation, observability, and continuous improvement. In this cross-functional role, you'll collaborate daily with Product Engineering & Management, Customer Support, Deal Desk, Global Services, and Sales teams to ensure a seamless and customer-centric service delivery model. As a core member of the team, you'll have the opportunity to design and implement operational best practices, contribute to service provisioning strategies, and drive innovations that enhance the overall platform experience. If you're driven by solving complex problems in a fast-paced environment and are passionate about operational resilience and service reliability, we'd love to hear from you.

Our Tech Stack: Linux Administration, Python, Docker, Kubernetes, MySQL, Kafka, ActiveMQ, Tomcat App & Web, Oracle, Load Balancers, Redis Cache, Debezium, AWS, WAF, LBs, Jenkins, GitOps, Terraform, Ansible, Puppet, Prometheus, Grafana, OpenTelemetry

In this role you'll get to:
- Architect and implement intelligent automation workflows for infrastructure lifecycle management, including self-healing systems, automated incident remediation, and configuration anomaly detection using Infrastructure as Code (IaC) and AI-driven tooling.
- Leverage predictive monitoring and anomaly detection techniques powered by AI/ML to proactively assess system health, optimize performance, and preempt service degradation or outages.
- Lead complex incident response efforts, applying deep root cause analysis (RCA) and postmortem practices to drive long-term stability, while integrating automated detection and remediation capabilities.
- Partner with development and platform engineering teams to build resilient CI/CD pipelines, enforce infrastructure standards, and embed observability and reliability into application deployments.
- Identify and eliminate reliability bottlenecks through automated performance tuning, dynamic scaling policies, and advanced telemetry instrumentation.
- Maintain and continuously evolve operational runbooks by incorporating machine learning insights, updating playbooks with AI-suggested resolutions, and identifying automation opportunities for manual steps.
- Stay abreast of emerging trends in AI for IT operations (AIOps), distributed systems, and cloud-native technologies to influence strategic reliability engineering decisions and tool adoption.

Who we're looking for:
- Hands-on experience with Linux server administration and Python programming.
- Deep experience with containerization and orchestration using Docker and Kubernetes, managing highly available services at scale.
- Experience with messaging systems like Kafka and ActiveMQ, databases like MySQL and Oracle, and caching solutions like Redis.
- Understands and applies AI/ML techniques in operations, including anomaly detection, predictive monitoring, and self-healing systems.
- Has a solid track record in incident management, root cause analysis, and building systems that prevent recurrence through automation.
- Is proficient in developing and maintaining CI/CD pipelines with a strong emphasis on observability, performance, and reliability.
- Experience with monitoring and observability using Prometheus, Grafana, and OpenTelemetry, with a focus on real-time anomaly detection and proactive alerting.
- Is comfortable writing and maintaining runbooks and enjoys enhancing them with automation and machine learning insights.
Keeps up to date with industry trends such as AIOps, distributed systems, SRE best practices, and emerging cloud technologies. Brings a collaborative mindset, working cross-functionally with engineering, product, and operations teams to align system design with business objectives. 1+ years of experience working in a SaaS environment.

Nice to Have:
- Red Hat Certified System Administrator (RHCSA) - Red Hat
- AWS Certification
- Certified Associate in Python Programming (PCAP) - Python Institute
- Docker Certified Associate (DCA) or Certified Kubernetes Administrator (CKA)
- Good knowledge of Jenkins
- Advanced certifications in SRE or related fields

As part of our commitment to building an inclusive, high-performance culture where ZEOs feel inspired, connected, and valued, we support ZEOs with:
- Competitive compensation, corporate bonus program, performance rewards, and retirement programs
- Medical insurance
- Generous, flexible time off
- Paid holidays, wellness days, and a company-wide end-of-year break
- 6 months fully paid parental leave
- Learning & Development stipend
- Opportunities to volunteer and give back, including charitable donation match
- Free resources and support for your mental wellbeing
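The anomaly-detection side of this role, in its simplest form, flags metric points that sit far from the recent baseline. A minimal z-score sketch (Python, which is in this posting's stack; the 2.5-sigma threshold and the `anomalies` helper are illustrative choices, not Zuora's actual tooling):

```python
import statistics

def anomalies(series, threshold=2.5):
    # Flag points more than `threshold` population standard deviations from
    # the mean -- the simplest baseline for the predictive monitoring above.
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    return [i for i, x in enumerate(series)
            if stdev and abs(x - mean) / stdev > threshold]

latency_ms = [12, 11, 13, 12, 11, 12, 95, 12, 13]
print(anomalies(latency_ms))  # [6] -- the 95 ms spike
```

Production AIOps systems replace the static mean with rolling or seasonal baselines, but the alerting logic is the same comparison against an expected band.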

Posted 1 week ago


10.0 - 15.0 years

13 - 17 Lacs

Bengaluru

Work from Office


Our Engineering team is driving the future of cloud security, developing one of the world's largest, most resilient cloud-native data platforms. At Skyhigh Security, we're enabling enterprises to protect their data with deep intelligence and dynamic enforcement across hybrid and multi-cloud environments. As we continue to grow, we're looking for a Principal Data Engineer to help us scale our platform, integrate advanced AI/ML workflows, and lead the evolution of our secure data infrastructure.

Responsibilities: As a Principal Data Engineer, you will be responsible for:
- Leading the design and implementation of high-scale, cloud-native data pipelines for real-time and batch workloads.
- Collaborating with product managers, architects, and backend teams to translate business needs into secure and scalable data solutions.
- Integrating big data frameworks (like Spark, Kafka, Flink) with cloud-native services (AWS/GCP/Azure) to support security analytics use cases.
- Driving CI/CD best practices, infrastructure automation, and performance tuning across distributed environments.
- Evaluating and piloting the use of AI/LLM technologies in data pipelines (e.g., anomaly detection, metadata enrichment, automation).
- Evaluating and integrating LLM-based automation and AI-enhanced observability into engineering workflows.
- Ensuring data security and privacy compliance.
- Mentoring engineers, ensuring high engineering standards, and promoting technical excellence across teams.

What We're Looking For (Minimum Qualifications):
- 10+ years of experience in big data architecture and engineering, including deep proficiency with the AWS cloud platform.
- Expertise in distributed systems and frameworks such as Apache Spark, Scala, Kafka, Flink, and Elasticsearch, with experience building production-grade data pipelines.
- Strong programming skills for building scalable data applications.
- Hands-on experience with ETL tools and orchestration systems.
- Solid understanding of data modeling across both relational (PostgreSQL, MySQL) and NoSQL (HBase) databases, and of performance tuning.

What Will Make You Stand Out (Preferred Qualifications):
- Experience integrating AI/ML or LLM frameworks (e.g., LangChain, LlamaIndex) into data workflows.
- Experience implementing CI/CD pipelines with Kubernetes, Docker, and Terraform.
- Knowledge of modern data warehousing (e.g., BigQuery, Snowflake) and data governance principles (GDPR, HIPAA).
- Strong ability to translate business goals into technical architecture and mentor teams through delivery.
- Familiarity with visualization tools (Tableau, Power BI) to communicate data insights, even if not a primary responsibility.

We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours, and family-friendly benefits to all of our employees. Medical, Dental and Vision Coverage.
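The real-time and batch pipelines this role describes typically reduce to windowed aggregations of keyed events. A minimal tumbling-window sketch (plain Python for illustration; in Spark or Flink the same grouping is expressed with built-in window operators):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_s=60):
    # Assign each (timestamp, key) event to a fixed-size window and count
    # occurrences per key -- the core of streaming security analytics like
    # "failed logins per minute".
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_s)
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "login"), (42, "login"), (61, "login"), (70, "error")]
print(tumbling_window_counts(events))
# {(0, 'login'): 2, (60, 'login'): 1, (60, 'error'): 1}
```

Tumbling windows never overlap, so each event is counted exactly once; sliding windows would instead assign each event to every window it falls inside.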

Posted 1 week ago


4.0 - 9.0 years

5 - 9 Lacs

Coimbatore

Work from Office


Lead the migration of dashboards and reports from QlikView/Qlik Sense to Power BI, ensuring consistency in data logic, design, and user experience. Design, build, and optimize scalable, interactive Power BI dashboards to support key business decisions. Write complex SQL queries for data extraction, transformation, and validation. Collaborate with business users, analysts, and data engineers to gather requirements and deliver analytics solutions. Leverage data modeling and DAX to build robust and reusable datasets in Power BI. Perform data validation and QA to ensure accuracy during and post-migration. Work closely with Snowflake-based datasets, or assist in transitioning data sources to Snowflake where applicable. Translate healthcare data metrics into actionable insights and visualizations.

Required Skills:
- 4+ years of experience in Business Intelligence or Data Analytics roles.
- Strong expertise in Power BI, including DAX, Power Query, custom visuals, and row-level security.
- Hands-on experience with QlikView or Qlik Sense, especially in migration scenarios.
- Advanced proficiency in SQL: complex joins, performance tuning, and stored procedures.
- Exposure to Snowflake or similar cloud data platforms (e.g., Redshift, BigQuery).
- Experience working with healthcare datasets (claims, clinical, EMR/EHR data, etc.) is a strong advantage.
- Strong analytical and problem-solving mindset.
- Effective communication and stakeholder management skills.
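Post-migration validation of the kind this role requires usually compares per-group totals between the legacy Qlik extract and the rebuilt Power BI dataset. A small sketch of that reconciliation (Python for illustration; the `reconcile` helper and the sample figures are hypothetical):

```python
def grouped_totals(rows, key_idx=0, val_idx=1):
    # Sum the measure per group key, like a GROUP BY in either BI tool.
    totals = {}
    for row in rows:
        totals[row[key_idx]] = totals.get(row[key_idx], 0) + row[val_idx]
    return totals

def reconcile(source_rows, migrated_rows):
    # Compare per-group totals between the legacy extract and the migrated
    # dataset; mismatched keys are returned for investigation.
    src, dst = grouped_totals(source_rows), grouped_totals(migrated_rows)
    return sorted(k for k in src.keys() | dst.keys() if src.get(k) != dst.get(k))

qlik = [("claims", 120), ("clinical", 80), ("claims", 30)]
powerbi = [("claims", 150), ("clinical", 75)]
print(reconcile(qlik, powerbi))  # ['clinical'] -- totals differ by 5
```

In practice the same comparison is run at several grain levels (grand total, per period, per dimension) so a discrepancy can be localized to a specific transformation step.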

Posted 1 week ago


3.0 - 8.0 years

4 - 8 Lacs

Gurugram

Work from Office


Bachelor's degree in computer science, information technology, or related fields. We are looking for a candidate who is a subject matter expert with hands-on experience supporting PostgreSQL databases. This role will focus on production database administration for the PostgreSQL database: install, monitor, and maintain PostgreSQL software (Windows/Linux); implement monitoring, alerting, and backup/recovery processes; and fulfill user requests ranging from access control, backup, restore, and refresh to non-production, through to performance tuning. At least 3 years of hands-on experience in PostgreSQL/MySQL database administration/engineering is required.
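The monitoring and performance-tuning duties above often start with mining the database log for slow statements. A toy sketch of that (Python; the log lines are synthetic, though their shape mirrors what PostgreSQL emits when `log_min_duration_statement` is enabled):

```python
import re

def slow_queries(log_lines, threshold_ms=500):
    # Pull `duration: X ms  statement: ...` entries out of log lines and
    # return the statements slower than the threshold -- a minimal version
    # of the monitoring/alerting responsibility described above.
    pattern = re.compile(r"duration: ([\d.]+) ms\s+statement: (.+)")
    hits = []
    for line in log_lines:
        m = pattern.search(line)
        if m and float(m.group(1)) > threshold_ms:
            hits.append(m.group(2))
    return hits

log = [
    "LOG:  duration: 12.4 ms  statement: SELECT 1",
    "LOG:  duration: 842.9 ms  statement: SELECT * FROM orders",
]
print(slow_queries(log))  # ['SELECT * FROM orders']
```

In production the same job is usually delegated to pg_stat_statements or an external monitoring stack, but the thresholding idea is identical.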

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Pune

Work from Office

Naukri logo

The Senior Associate, Solution Engineering plays a significant role in developing large-scale, enterprise-level back-end solutions and ensuring that functionality is implemented with a focus on code optimization and performance. This role is significant in the design and architecture of the Settlement Transaction processing application, which serves as a critical input to the global settlement system. Role Responsibilities: Design, develop, and maintain solutions using Oracle WebCenter Content (WCC/UCM), including custom components, metadata models, and workflows. Build and deploy SOA composites using Oracle SOA Suite components such as BPEL, Mediator, Human Task, and Adapters. Develop custom WebCenter Content components and services using Java, Idoc Script, and WCC APIs. Integrate WCC and SOA solutions with Oracle E-Business Suite (EBS), Oracle Cloud ERP, or other enterprise systems. Implement metadata-driven document management solutions for structured and unstructured content in WCC. Create and consume web services (SOAP/REST) within SOA composites and WCC integrations. Perform unit testing and support QA activities, ensuring code meets functional and performance requirements. Participate in solution design reviews and technical discussions to ensure scalable and maintainable architecture. Troubleshoot issues across WCC and SOA environments, including logs, performance tuning, and error handling. Maintain technical documentation including solution architecture, deployment procedures, and support guides. Role Requirements: Bachelor's degree in Computer Science, Information Systems, or a related field. 5+ years of hands-on experience in developing solutions using Oracle WebCenter Content (UCM) and Oracle SOA Suite. Strong programming skills in Oracle BPEL, Java, PL/SQL, XML, XSLT, and scripting languages (e.g., WLST, shell). Experience building custom components and Idoc Script logic in WebCenter Content.
Expertise in SOA Suite development, including BPEL, Mediator, Adapters, and Human Task components. Familiarity with Oracle WebLogic Server, including deployment and configuration of SOA/WCC applications. Experience working with Oracle Document Capture/Imaging solutions is a plus. Good understanding of integration techniques, web service security, and exception handling in composite applications. Exposure to Agile development practices, CI/CD tools (e.g., Jenkins, Git), and test automation frameworks. Excellent communication and teamwork skills, with a focus on collaboration and delivery in multi-disciplinary teams. Your specific benefits include: Employees' Provident Fund (EPF), gratuity payment, public holidays, annual leave, sick leave, compensatory leave, and maternity/paternity leave, annual health check-up, hospitalization insurance coverage (Mediclaim), group life insurance, group personal accident insurance coverage, business travel insurance, cab facility, and relocation benefit.

Posted 1 week ago

Apply

2.0 - 8.0 years

5 - 8 Lacs

Gurugram

Work from Office

Naukri logo

1. *Database Design and Implementation*: Design, implement, and maintain databases to meet organizational needs. 2. *Database Administration*: Manage and administer databases, including data modeling, normalization, and optimization. 3. *Data Security and Backup*: Ensure data security, integrity, and availability by implementing backup and recovery procedures. 4. *Performance Monitoring and Optimization*: Monitor database performance, identify bottlenecks, and optimize database configuration for improved performance. 5. *Troubleshooting and Support*: Troubleshoot database issues, provide technical support, and resolve database-related problems. 6. *Data Migration and Integration*: Plan, coordinate, and execute data migrations and integrations with other systems. 7. *Database Upgrades and Patches*: Apply database upgrades, patches, and fixes to ensure database stability and security. 8. *Documentation and Reporting*: Maintain accurate documentation, reports, and metrics on database performance, security, and usage. Requirements: 1. *Technical Skills*: Proficiency in database management systems (DBMS) such as Oracle, Microsoft SQL Server, MySQL, or PostgreSQL. 2. *Database Design and Development*: Knowledge of database design principles, data modeling, and normalization. 3. *SQL and Query Optimization*: Strong understanding of SQL, query optimization, and performance tuning. 4. *Data Security and Backup*: Knowledge of data security best practices, backup and recovery procedures, and disaster recovery planning. 5. *Problem-Solving and Troubleshooting*: Excellent problem-solving and troubleshooting skills, with the ability to analyze complex database issues. 6. *Communication and Collaboration*: Effective communication and collaboration skills, with the ability to work with technical and non-technical stakeholders. Nice to Have: 1.
*Certifications*: Database certifications such as Oracle Certified Professional (OCP), Microsoft Certified Database Administrator (MCDA), or Certified Data Administrator (CDA). 2. *Cloud Experience*: Experience with cloud-based databases such as Amazon RDS, Google Cloud SQL, or Microsoft Azure Database Services. 3. *Data Warehousing and Business Intelligence*: Knowledge of data warehousing and business intelligence concepts, including data modeling, ETL, and data visualization.
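The backup-and-recovery duties this listing describes can be sketched in miniature with SQLite's online backup API from Python's standard library; this is a hedged stand-in for vendor tooling (Oracle RMAN, `pg_basebackup`, `mysqldump`), and the `accounts` schema is invented for illustration.

```python
import sqlite3

# Source database with some sample data (schema is invented for this sketch).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
src.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 250.5)])
src.commit()

# Hot backup: Connection.backup copies pages while the source stays readable.
backup = sqlite3.connect(":memory:")
src.backup(backup)

# Simulate data loss on the source, then recover by restoring from the backup.
src.execute("DROP TABLE accounts")
backup.backup(src)

rows = src.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall()
print(rows)  # the restored data matches what was backed up
```

The essential discipline is the same at any scale: take backups while the system is live, and regularly rehearse the restore path, not just the backup path.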

Posted 1 week ago

Apply

11.0 - 12.0 years

32 - 40 Lacs

Bengaluru

Work from Office

Naukri logo

Job Summary: We are seeking a seasoned Lead Data Engineer with deep expertise in the Azure Data Stack and hands-on experience in Microsoft Fabric. The ideal candidate will have led or been a key part of at least one Fabric implementation and possess a strong background in designing, developing, and delivering enterprise-grade data engineering solutions. This role will be pivotal in driving data strategy, governance, and platform modernization initiatives. Key Responsibilities: Lead and architect end-to-end data engineering solutions using Microsoft Fabric. Work closely with cross-functional teams to define data strategy, architecture, and integration patterns. Design and implement scalable data pipelines, dataflows, and data lakes using Azure Data Services. Oversee and mentor a team of data engineers and ensure best practices are followed. Ensure data governance, quality, and security standards are implemented and adhered to. Collaborate with stakeholders to define KPIs and support advanced analytics and reporting needs. Must-Have Skills: 12-14 years of overall experience in data engineering and solution architecture. At least one full-cycle implementation experience in Microsoft Fabric. Deep understanding and hands-on experience with Azure Data Services, including Azure Data Factory (ADF), Azure Synapse Analytics, Azure Data Lake, and Azure SQL / Synapse SQL. Experience with Delta Lake, Spark, and Lakehouse architecture is highly desirable. Expertise in modern data integration and transformation techniques. Strong understanding of data modeling, ETL/ELT best practices, and performance tuning. Preferred Qualifications: Bachelor's degree in Engineering (B.Tech/BE) or equivalent in Computer Science, Information Technology, or a related field. Microsoft Azure certifications (e.g., DP-203, DP-500, or equivalent). Prior experience leading geographically distributed teams. Familiarity with DevOps and CI/CD practices in data engineering projects. EXPERIENCE 11-12 Years SKILLS Primary Skill:
Data Engineering Sub Skill(s): Data Engineering Additional Skill(s): Databricks, Azure Data Factory
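The scalable-pipeline responsibilities above usually involve incremental (delta) loads driven by a high-watermark column, a pattern ADF and Fabric pipelines commonly implement. Below is a minimal, hedged sketch in plain Python; the in-memory "source table" and its column names are stand-ins, not any product's API.

```python
from datetime import datetime, timezone

# Hypothetical source rows with a last-modified timestamp (watermark column).
source = [
    {"id": 1, "updated": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"id": 3, "updated": datetime(2024, 1, 9, tzinfo=timezone.utc)},
]

def incremental_load(source_rows, watermark):
    """Return only rows changed since the last run, plus the new watermark."""
    delta = [r for r in source_rows if r["updated"] > watermark]
    new_watermark = max((r["updated"] for r in delta), default=watermark)
    return delta, new_watermark

# A run with a stored watermark picks up only rows modified after it.
wm = datetime(2024, 1, 3, tzinfo=timezone.utc)
delta, wm = incremental_load(source, wm)
print([r["id"] for r in delta])  # rows 2 and 3 only
```

In a real pipeline the watermark is persisted (e.g., in a control table) between runs, so each execution copies only new or changed rows instead of the full table.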

Posted 1 week ago

Apply

9.0 - 14.0 years

5 - 6 Lacs

Pune

Work from Office

Naukri logo

We are looking to hire SAP ABAP HANA professionals in the following areas: Experience required: 9+ years. The Sr. Solution Architect, D&T, HANA ABAP possesses the knowledge and experience to work with moderate guidance. The position is responsible for the overall design, development, and implementation of SAP S/4HANA applications. The role is proficient in the areas of solution design, development, performance tuning, and troubleshooting. Working in a global environment, this role takes a hands-on approach and is capable of working across teams and time zones. This role must have hands-on experience in CDS views and OData development with SAP HANA Studio and SAP HANA in an Eclipse-based environment, along with good experience in ABAP OO programming. Essential Duties and Responsibilities: Understand the overall vision, business needs, and solution architecture for SAP applications, and use this knowledge to solve reasonably complex problems. Utilize knowledge of best practices and act as an SME to provide the technical solution design. Work closely with the process team to understand business requirements, review functional specs, develop technical specs, build the application, unit test the application, and support the process team during integration testing. With moderate direction, design and build applications with best practices in mind. Investigate and correct production issues. Interact with support teams and train other team members. Other duties as assigned. Education and Experience: Bachelor's degree, preferably in Engineering. 9+ years of relevant experience in the areas specified under Essential Duties and Responsibilities. Knowledge, Skills and Abilities: Experience gathering requirements and providing technical solution design. Practical/project/hands-on experience in SAP HANA Studio or Eclipse and ABAP programming. Good knowledge of creating CDS views and OData services.
Experience in building analytical privileges using DCL. Experience in writing SQL Script and in implementing database stored procedures and database procedure proxies. Must have excellent analytical, logical problem-solving, and technological skills. Ability to work in a global and virtual environment and to effectively prioritize and execute tasks in a high-pressure environment. Ability to work with basic guidance in a fast-paced and complex environment with a self-motivated work ethic; utilize sound judgment with an ability to manage multiple priorities with a sense of urgency. Able to be aware of all relevant SOPs as per Company policy as they relate to the position covered by this. Preferred Qualifications: Sound knowledge of SAP business processes. Physical and Travel Requirements: Office environment; 10% travel requirement. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.

Posted 1 week ago

Apply

8.0 - 10.0 years

50 - 55 Lacs

Bengaluru

Work from Office

Naukri logo

Number of Openings 5 ECMS Request no 529895 | 529898 | 529899 | 529947 | 530014 Total Yrs. of Experience* 8-10 yrs Relevant Yrs. of experience* 8+ yrs Job Description: ABAP Cloud, CDS, Enhancement Framework, RAP methodology, Adobe Forms. Act as the technical expert, ensuring quick turnaround in identifying application issues and providing timely solutions. Develop detailed plans and accurate estimates for completion of the build, system testing, and implementation phases of the project. Transform business function requirements into technical program specs to code, test, and debug programs. Develop, code, document, and execute unit tests, system, integration, and acceptance tests and testing tools for functions of high complexity. Perform performance tuning to improve performance over multiple functions. Engage other technical team members in the design, delivery, and deployment of solutions. Ensure integration system activities, including monitoring the technical architecture (particularly scalability, availability, and archiving), meet all SLAs. Manage operational support, performing work during change windows as well as providing on-call support as required by the team. Undertake performance tuning activities for SAP integration activities. Mandatory skill: SAP ABAP. Desired skills*: SAP ABAP. Domain*: SAP ABAP. Vendor billing rate* 12-12500K Precise Work Location Offshore BG Check Post Onboarding Delivery Anchor for screening, interviews and feedback* giri_puppala@infosys.com Is there any working in shifts from standard Daylight (to avoid confusions post onboarding)* Shift may vary

Posted 1 week ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Naukri logo

Number of Openings 2 ECMS ID in sourcing stage 321700Y25, 321701Y25 Assignment Duration 6 Months Total Yrs. of Experience 5-8 Years Relevant Yrs. of experience 5-8 Years Detailed JD (Roles and Responsibilities): Seeking a Senior SAP ABAP on HANA consultant with strong expertise in OData, CDS Views, and SAP BTP, with 8+ years of overall experience. The ideal candidate will design, develop, and optimize SAP solutions leveraging HANA capabilities, integrate with SAP BTP, and deliver modern APIs and services. Design and develop complex ABAP programs optimized for SAP HANA. Create and maintain CDS Views and OData services for SAP Fiori and external integrations. Develop and deploy applications on SAP BTP, including integration with S/4HANA and third-party systems. Collaborate with functional and technical teams to deliver end-to-end solutions. Optimize performance of custom code and troubleshoot issues in HANA-based environments. Implement best practices for code quality, security, and reusability. Support system upgrades, migrations, and patching activities. Document technical specifications and provide knowledge transfer to team members. Mandatory skills: Strong experience in ABAP programming for HANA (AMDP, CDS, performance tuning). - Expertise in object-oriented programming (OO ABAP). - Hands-on expertise with OData service creation and consumption. - Experience with SAP BTP (Business Technology Platform) services and integration. - Good understanding of SAP Fiori/UI5 development and extensibility. - Hands-on expertise with designing, configuring, and maintaining workflows. - Experience with S/4HANA implementations and upgrades. - Familiarity with SAP Cloud Connector and security concepts. - Excellent troubleshooting and analytical skills. - Ability to address and resolve complex technical issues. - In-depth and advanced knowledge of SAP development tools across SAP modules. Desired/Secondary skills: Experience with SAP Integration Suite, API Management, or SAP Analytics Cloud.
Workflow Troubleshooting and BRF+. Knowledge of DevOps tools and CI/CD pipelines for SAP development. Knowledge of Agile Methodologies. SAP certification(s) in relevant areas. Domain SAP ABAP Max Vendor Rate in Per Day (Currency in relevance to work location) 9000-10000/- INR Work Location given in ECMS ID Hyderabad STP (Gachibowli) / Pune/Kolkata WFO/WFH/Hybrid WFO WFO (All 5 days a week) BG Check (Before OR After onboarding) Post Onboarding Is there any working in shifts from standard Daylight (to avoid confusions post onboarding) YES/ NO 12PM-9:30 PM shift

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Naukri logo

Number of Openings 1 ECMS ID in sourcing stage 321702Y25 Assignment Duration 6 Months Total Yrs. of Experience 5-8 Years Relevant Yrs. of experience 5-8 Years Detailed JD (Roles and Responsibilities): Seeking a Senior SAP BODS Consultant with strong expertise in data integration and data migration and 8+ years of experience. Build and develop BODS solutions for data integration and ETL processes based on business requirements. Develop and implement ETL workflows, dataflows, and other transforms. Optimize and troubleshoot BODS performance, including performance tuning. Maintain SAP BODS code using best practices and standards. Analyze and debug BODS code and find the root cause of specific issues. Test and validate in the lower environment before migrating the batch jobs. Mandatory skills: At least 8+ years of experience in data migration/data integration using SAP BODS. Experience in extracting data from multiple sources and loading the data into SAP S/4HANA. Expertise in using BODS transforms while developing jobs, such as Query, Validation, Pivot, Case, Merge, etc. Desired/Secondary skills: Good experience in performing code migration through central repositories using best practices. Strong knowledge of SQL to perform analysis on the data. Good working expertise in building reconciliation jobs after the load. Domain SAP BODS Max Vendor Rate in Per Day (Currency in relevance to work location) 9000-10000/- INR Work Location given in ECMS ID Hyderabad STP (Gachibowli) / Pune/Kolkata WFO/WFH/Hybrid WFO (all 5 days a week) BG Check (Before OR After onboarding) Post Onboarding Is there any working in shifts from standard Daylight (to avoid confusions post onboarding) YES/NO 12PM-9:30PM shift

Posted 1 week ago

Apply

3.0 - 8.0 years

6 - 7 Lacs

Mumbai

Work from Office

Naukri logo

Job description: Write data structures, queries, stored procedures, PL/SQL commands, and various important documents for databases. Evaluate hardware and software requirements based on specific requests. Enroll users and grant privileges to users. Control access and permissions for database users. Responsible for data analysis; provide the best query execution plan and resolve performance issues through proper indexing, data retrieval, and storage mechanisms. Manage database parameters and monitor performance of the database. Maintain data formats and standards. Implement backup policies for disaster management, so that in case of any type of error such as hardware/software failure, data corruption, or virus, the database can be recovered and made available to users quickly. Optimize the database for better performance; provide database resource management features that help control resource allocation. The vendor may use any tool for effective database management as well as early alerts. Download, install, and configure patches as and when required. Maintain multiple production database systems or create testing and development database systems similar to production, and roll out existing installations to other hosts. Create and develop a strong disaster management policy as per SLA requirements. Perform periodic performance tuning and proactive database tasks, and maintain proper documentation for future usage. What we are looking for: Any Engineering Graduate (B.E. or B.Tech in Computer/IT/EEE) with 3+ years of experience in Oracle database administration.

Posted 1 week ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Ahmedabad

Work from Office

Naukri logo

Job Description: 5-7 years of hands-on experience in SQL development, query optimization, and performance tuning. Expertise in ETL tools (SSIS, Azure ADF, Databricks, Snowflake or similar) and relational databases (SQL Server, PostgreSQL, MySQL, Oracle). Strong understanding of data warehousing concepts, data modeling, indexing strategies, and query execution plans. Proficiency in writing efficient stored procedures, views, triggers, and functions for large datasets. Experience working with structured and semi-structured data (CSV, JSON, XML, Parquet). Hands-on experience in data validation, cleansing, and reconciliation to maintain high data quality. Exposure to real-time and batch data processing techniques. Nice-to-have: Experience with Azure/Other Data Engineering (ADF, Azure SQL, Synapse, Databricks, Snowflake), Python, Spark, NoSQL databases, and reporting tools like Power BI or Tableau. Strong problem-solving skills and the ability to troubleshoot ETL failures and performance issues. Ability to collaborate with business and analytics teams to understand and implement data requirements.
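The data-validation and reconciliation duties named here typically boil down to comparing row counts and control totals between source and target after a load. Here is a minimal, hedged sketch in plain Python; the row data and the `reconcile` helper are invented for illustration.

```python
# Hypothetical extracted rows: (id, code, amount). One row is lost in the load.
source_rows = [(1, "A", 10.0), (2, "B", 20.5), (3, "C", 30.0)]
target_rows = [(1, "A", 10.0), (2, "B", 20.5)]

def reconcile(src, tgt, amount_idx=2):
    """Compare row counts and a simple control total between source and target."""
    report = {
        "src_count": len(src),
        "tgt_count": len(tgt),
        "src_total": round(sum(r[amount_idx] for r in src), 2),
        "tgt_total": round(sum(r[amount_idx] for r in tgt), 2),
    }
    report["match"] = (report["src_count"] == report["tgt_count"]
                       and report["src_total"] == report["tgt_total"])
    return report

report = reconcile(source_rows, target_rows)
print(report)  # a mismatch flags the failed load for investigation
```

In production the same checks are usually expressed as SQL against both systems (or built into the ETL tool), with mismatches raising alerts rather than being printed.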

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Ahmedabad

Work from Office

Naukri logo

Job Description: 5+ years of experience in BI development with strong hands-on expertise in: SQL Server / T-SQL and other relational databases. ETL tools such as SSIS, Azure Data Factory, Informatica, or similar. Cloud data services, ideally in Azure, but AWS/GCP is also considered. Power BI, including DAX, Power Query, data modeling, and deployment. Solid understanding of data warehousing concepts, dimensional modeling, and performance tuning. Experience with source control, CI/CD for BI projects, and agile methodologies. Excellent communication and problem-solving skills. Ability to work independently and manage multiple tasks in a dynamic environment.

Posted 1 week ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Chennai, Guindy

Work from Office

Naukri logo

Overview: Oracle DBA (Primary Skill), MySQL DBA (Secondary Skill). Responsibilities: Install, configure, and upgrade Oracle database software. Apply quarterly patches for Grid and Oracle home. Configure and maintain standby databases. Monitor database performance and implement optimization strategies. Ensure database security and compliance with organizational policies. Perform regular backups and recovery operations. Collaborate with developers to design and implement database solutions. Troubleshoot and resolve database-related issues promptly. Develop and maintain database documentation and standards. Plan and execute database migrations and upgrades. Set up Oracle GoldenGate, both unidirectional and bidirectional. Maintain Oracle GoldenGate synchronization with the source. Identify Oracle GoldenGate sync-related issues and implement best practices. Identify performance issues and implement best practices. User creation and access management. Storage capacity planning to ensure ASM and tablespaces are maintained under threshold, avoiding repetitive space-related incidents. Create and maintain RAC with Data Guard. Administration of RAC databases. AWS RDS instance administration, rehydration, engine upgrades, and certificate renewal. Installing, configuring, and maintaining MySQL database management systems (DBMS). Monitoring database performance, availability, and scalability to ensure optimal database performance. Troubleshooting database issues and performing regular database maintenance tasks such as backups, recovery, and replication. Optimizing database design and query performance for efficient data retrieval and storage. Performing database upgrades and migrations as needed. Developing and implementing database security measures to ensure data confidentiality, integrity, and availability. Developing and enforcing database standards, procedures, and policies. Create and maintain detailed documentation of database configurations, procedures, and troubleshooting steps. Respond to alerts and incidents promptly to minimize service disruptions. Collaborate with Level 1 and Level 2 support and other teams to address escalated incidents. AWS RDS instance administration and engine upgrades.

Posted 1 week ago

Apply

3.0 - 4.0 years

5 - 6 Lacs

Mumbai

Work from Office

Naukri logo

- Qlik Development: Design, develop, and deploy QlikView and Qlik Sense applications, reports, and dashboards to meet business requirements. - Data Integration: Collaborate with data engineers to integrate data from multiple sources (databases, APIs, flat files) into Qlik for reporting and analysis. - Performance Tuning: Optimize Qlik applications for performance, focusing on reducing load times and improving user experience. - User Support & Troubleshooting: Provide support for existing Qlik applications, resolving any technical issues and ensuring optimal performance. - Collaborative Problem Solving: Work closely with business stakeholders to gather requirements and transform them into functional and technical specifications. - Best Practices: Ensure that development follows Qlik best practices, including data modeling, data governance, and performance optimization. - Testing & Documentation: Participate in the testing process and create comprehensive documentation for developed applications, including usage guides and technical specifications. - Mentorship: Provide guidance and mentorship to junior developers, sharing knowledge on Qlik development best practices and approaches. - Continuous Improvement: Stay up-to-date with the latest features and updates in Qlik technologies and actively contribute to improving the development processes.
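A common Qlik performance-tuning technique implied by this listing is the incremental reload: merge only new and changed rows into a stored extract (a QVD in Qlik) keyed by a primary key. The logic is sketched below in plain Python rather than Qlik load script, with invented datasets, purely to show the merge pattern.

```python
# Stored extract (stand-in for a QVD file), keyed by a hypothetical "id".
stored = {101: {"id": 101, "amount": 50}, 102: {"id": 102, "amount": 75}}

# Delta fetched from the source since the last reload.
incoming = [
    {"id": 102, "amount": 80},   # changed row
    {"id": 103, "amount": 40},   # new row
]

def incremental_reload(extract, delta, key="id"):
    """Merge changed/new rows into the stored extract, mimicking a QVD reload."""
    merged = dict(extract)
    for row in delta:
        merged[row[key]] = row   # upsert: replace changed rows, add new ones
    return merged

stored = incremental_reload(stored, incoming)
print(sorted(stored))  # keys 101, 102, 103; 102 now holds the updated amount
```

In Qlik load script the same effect is achieved by loading the delta, concatenating the old QVD with `WHERE NOT EXISTS(id)`, and storing the result back to the QVD.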

Posted 1 week ago

Apply

8.0 - 15.0 years

30 - 35 Lacs

Pune

Work from Office

Naukri logo

Job Title: Sr. Engineer, Data Center Ops and Cloud Architecture Location: Mumbai, Hyderabad Experience: 8-15 Years JOB DESCRIPTION: Provide administration, maintenance, configuration, and ongoing support of server infrastructure hosting Enterprise Applications; resolve complex enterprise system hardware and software problems; collaborate with peer teams and vendors to coordinate solutions. Responsible for the management, performance, and security of Oracle and SQL database platforms; duties include administration, assisting with performance tuning and optimization, and backup and recovery of databases. Recommend actions for fine-tuning and capacity planning of existing systems; maintain up-to-date knowledge of new hardware and software developments; participate in the evaluation of alternative approaches and new software or modifications to enhance operations and development activities. Play a key role in cloud architecture discussions, centered around hybrid or fully cloud-based models. Develop and maintain installation and configuration procedures; interface with business clients and technology staff to implement solutions that meet business needs and address the impact of new or revised applications on existing infrastructure, which includes the installation and testing of new enterprise system software and hardware releases. Participate in the generation and/or implementation of new ideas for enhancements and improvements; work directly with users to prioritize requests and frequently update users to make certain that they are informed of progress and estimated delivery timeframes. Coordinate and effectively support escalation communication; participate in the development and maintenance of appropriate project technical documentation including design, configuration, installation, operational procedures, knowledge articles, user reference guides, and project plans.
Act as the point of escalation for level-two support and major incidents, working with vendors to resolve complex technical issues; proactively drive innovation in technical processes and procedures. Required Skills: At least 8 years of experience in a related job area (infrastructure engineer, system administrator). Proficiency in a scripting language (such as PowerShell). Strong communication, leadership, and time management skills. Ability to translate business needs into technical solutions. Experience with administration and management of Oracle and SQL database platforms. Experience with Azure cloud infrastructure solutions. Experience with Linux operating systems.

Posted 1 week ago

Apply

10.0 - 15.0 years

40 - 45 Lacs

Pune

Work from Office

Naukri logo

We are seeking a highly skilled and experienced Senior Technical Lead to join our Software Stack team. This role is pivotal in driving the development and integration of the LTE Protocol Stack and advanced Open RAN solutions. The ideal candidate will provide strong technical leadership, mentor team members, and collaborate cross-functionally to deliver high-quality, scalable, and innovative software solutions for our Virtual Base Band Unit (vBBU) product. What you need: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field. 10+ years of experience in telecom software development, with at least 3 years in a technical leadership role. Deep understanding of the LTE/LTE-A protocol stack and Open RAN architecture. Experience with virtualization technologies and vRAN deployment models. Strong programming skills in C/C++ and scripting languages (e.g., Python, Bash). Proven experience in cross-functional collaboration and agile development environments. Excellent communication, leadership, and problem-solving skills. Preferred Qualifications: Experience with the 5G NR stack and O-RAN specifications. Familiarity with cloud-native technologies (Docker, Kubernetes). Prior experience in customer-facing roles or support engineering. Key Responsibilities: Lead the design, development, and integration of LTE Advanced features and protocol stack optimizations for the Open RAN vBBU. Provide technical leadership and mentorship to the software access stack team. Conduct thorough code reviews to ensure code quality, maintainability, and performance. Collaborate with cross-functional teams including: FQG and RQG QA teams for feature verification, test case planning, and execution; PHY and Platform teams for seamless integration and performance tuning; PLM and the System Architecture Group for aligning product requirements and architectural decisions. Review and contribute to design documents and requirements specifications.
Write and maintain technical documentation, including algorithms and system behavior. Support customer deployments and provide expert-level troubleshooting and resolution. Drive continuous improvement in development processes, tools, and methodologies.

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Mumbai, Chennai, Gurugram

Work from Office

Naukri logo

Senior Site Reliability Engineer II (Open) Would you like to be part of a team that delivers high-quality software to our customers? Are you a visible champion with a can do attitude and enthusiasm that inspires others? About the Business LexisNexis Risk Solutions is the essential partner in the assessment of risk. Within our Business Services vertical, we offer a multitude of solutions focused on helping businesses of all sizes drive higher revenue growth, maximize operational efficiencies, and improve customer experience. Our solutions help our customers solve difficult problems in the areas of Anti-Money Laundering/Counter Terrorist Financing, Identity Authentication & Verification, Fraud and Credit Risk mitigation and Customer Data Management. You can learn more about LexisNexis Risk at the link below, https://risk.lexisnexis.com About the Team This Team performs complex research, design, and software development assignments within a software functional area or product line, and provides direct input to project plans, schedules, and methodology in the development of cross-functional software products. About the Role Senior Site Reliability Engineer II will be closely working with SREs and other stakeholder across geographies on their platform requirements using MS Azure. This role is imperative as we migrate from on-premises infrastructure to cloud solutions, which substantially increases operational complexity and workload. This role will focus on automation and optimization reducing manual effort, ensuring platform reliability, and supporting our efforts to better manage and contain increasing cloud costs and the rapidly growing data estate in Azure. Proficient in Azure-based platform operations, monitoring, and incident management with expertise in tools like Azure Monitor, ADF, Synapse, Databricks, CI/CD, and strong problem-solving and communication skills. 
The candidate should also be willing to work in on-call rotations or provide after-hours support if needed.

Responsibilities:
- Daily monitoring, incident response, and performance tuning
- Automation and optimization to reduce manual effort
- Ensuring platform reliability
- Supporting efforts to manage and contain increasing cloud costs
- Managing the rapidly growing data estate in Azure
- Willingness to work in on-call rotations or provide after-hours support if needed

Requirements:
- 8+ years of total experience and 4+ years of relevant experience
- AZ-900 and AZ-104 certifications; Azure platform operations & monitoring
- Hands-on experience with Azure Monitor, Log Analytics, and Application Insights
- Familiarity with Azure Data Factory, Synapse, and Databricks
- Azure Storage, Key Vault, and Role-Based Access Control (RBAC)
- Knowledge of SDLC processes such as requirement gathering, design, implementation (coding), testing, deployment, and maintenance
- Proficient in requirement gathering and documentation, monitoring, and incident management
- Proficiency in setting up alerts, dashboards, and automated responses
- Experience with uptime/health monitoring and SLA enforcement
- Ability to triage platform issues and coordinate with engineering teams
- Comfortable with log review, root cause analysis, and performance tuning
- Strong problem-solving skills
- Excellent communication skills, verbal and written

Desirable Skills:
- GitHub Actions
- Unity Catalog
- Familiarity with ITIL or operational runbooks

Learn more about the LexisNexis Risk team and how we work. We are committed to providing a fair and accessible hiring process. If you have a disability or other need that requires accommodation or adjustment, please let us know by completing our Applicant Request Support Form or contact 1-855-833-5120. Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants.
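The requirements above mention uptime/health monitoring and SLA enforcement. Purely as an illustrative sketch (not part of the posting), an SLA target translates into a downtime "error budget" like this; the function name and period length are hypothetical:

```python
# Hypothetical sketch: converting an SLA availability target into an
# allowed-downtime budget. Numbers and names are illustrative only.

def error_budget_minutes(sla_percent: float, days: int = 30) -> float:
    """Allowed downtime in minutes for a given SLA over a period of `days`."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

if __name__ == "__main__":
    for sla in (99.0, 99.9, 99.99):
        budget = error_budget_minutes(sla)
        print(f"{sla}% SLA -> {budget:.1f} minutes/month downtime budget")
```

For example, a 99.9% monthly SLA leaves roughly 43.2 minutes of permissible downtime, which is the budget an SRE team enforces against when triaging incidents.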
Learn more about spotting and avoiding scams here. Please read our Candidate Privacy Policy. We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law. USA Job Seekers: EEO Know Your Rights.

Posted 1 week ago

Apply

4.0 - 8.0 years

12 - 17 Lacs

Mumbai, Chennai, Gurugram

Work from Office


Senior Site Reliability Engineer II (Open)

Would you like to be part of a team that delivers high-quality software to our customers? Are you a visible champion with a can-do attitude and enthusiasm that inspires others?

About the Business
LexisNexis Risk Solutions is the essential partner in the assessment of risk. Within our Business Services vertical, we offer a multitude of solutions focused on helping businesses of all sizes drive higher revenue growth, maximize operational efficiencies, and improve customer experience. Our solutions help our customers solve difficult problems in the areas of Anti-Money Laundering/Counter Terrorist Financing, Identity Authentication & Verification, Fraud and Credit Risk mitigation, and Customer Data Management. You can learn more about LexisNexis Risk at https://risk.lexisnexis.com.

About the Team
This team performs complex research, design, and software development assignments within a software functional area or product line, and provides direct input to project plans, schedules, and methodology in the development of cross-functional software products.

About the Role
The Senior Site Reliability Engineer II will work closely with SREs and other stakeholders across geographies on their platform requirements using MS Azure. This role is imperative as we migrate from on-premises infrastructure to cloud solutions, which substantially increases operational complexity and workload. The role will focus on automation and optimization to reduce manual effort, ensure platform reliability, and support our efforts to better manage and contain increasing cloud costs and the rapidly growing data estate in Azure. The ideal candidate is proficient in Azure-based platform operations, monitoring, and incident management, with expertise in tools like Azure Monitor, ADF, Synapse, and Databricks, experience with CI/CD, and strong problem-solving and communication skills.
The candidate should also be willing to work in on-call rotations or provide after-hours support if needed.

Responsibilities:
- Daily monitoring, incident response, and performance tuning
- Automation and optimization to reduce manual effort
- Ensuring platform reliability
- Supporting efforts to manage and contain increasing cloud costs
- Managing the rapidly growing data estate in Azure
- Willingness to work in on-call rotations or provide after-hours support if needed

Requirements:
- 8+ years of total experience and 4+ years of relevant experience
- AZ-900 and AZ-104 certifications; Azure platform operations & monitoring
- Hands-on experience with Azure Monitor, Log Analytics, and Application Insights
- Familiarity with Azure Data Factory, Synapse, and Databricks
- Azure Storage, Key Vault, and Role-Based Access Control (RBAC)
- Knowledge of SDLC processes such as requirement gathering, design, implementation (coding), testing, deployment, and maintenance
- Proficient in requirement gathering and documentation, monitoring, and incident management
- Proficiency in setting up alerts, dashboards, and automated responses
- Experience with uptime/health monitoring and SLA enforcement
- Ability to triage platform issues and coordinate with engineering teams
- Comfortable with log review, root cause analysis, and performance tuning
- Strong problem-solving skills
- Excellent communication skills, verbal and written

Desirable Skills:
- GitHub Actions
- Unity Catalog
- Familiarity with ITIL or operational runbooks

Learn more about the LexisNexis Risk team and how we work. We are committed to providing a fair and accessible hiring process. If you have a disability or other need that requires accommodation or adjustment, please let us know by completing our Applicant Request Support Form or contact 1-855-833-5120. Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants.
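The requirements above mention setting up alerts and automated responses. As an illustrative sketch only (not taken from the posting), a naive "fire after N consecutive breaches" alert rule, similar in spirit to the evaluation windows in monitoring tools, might look like this; the function and sample data are hypothetical:

```python
# Hypothetical sketch of a consecutive-breach alert rule. This is an
# illustration of the general idea, not any specific product's behavior.

def should_alert(samples: list[float], threshold: float, breaches: int = 3) -> bool:
    """Return True if the last `breaches` samples all exceed the threshold."""
    if len(samples) < breaches:
        return False  # not enough data to evaluate the window yet
    return all(s > threshold for s in samples[-breaches:])

if __name__ == "__main__":
    cpu_percent = [55.0, 62.0, 81.0, 84.0, 90.0]
    print(should_alert(cpu_percent, threshold=80.0))  # prints True: last 3 samples exceed 80
```

Requiring several consecutive breaches rather than a single spike is a common way to cut alert noise from transient load.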
Learn more about spotting and avoiding scams here. Please read our Candidate Privacy Policy. We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law. USA Job Seekers: EEO Know Your Rights.

Posted 1 week ago

Apply

Featured Companies