
4124 Logging Jobs - Page 42

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Job Description: DevOps Engineer (Onsite – Mumbai)

Location: Onsite – Mumbai, India
Experience: 3+ years

About the Role:
We are looking for a skilled and proactive DevOps Engineer with 3+ years of hands-on experience to join our engineering team onsite in Mumbai. The ideal candidate will have a strong background in CI/CD pipelines, cloud platforms (AWS, Azure, or GCP), infrastructure as code, and containerization technologies like Docker and Kubernetes. This role involves working closely with development, QA, and operations teams to automate, optimize, and scale our infrastructure.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for efficient and reliable deployment processes
- Manage and monitor cloud infrastructure (preferably AWS, Azure, or GCP)
- Build and manage Docker containers, and orchestrate with Kubernetes or similar tools
- Implement and manage Infrastructure as Code using tools like Terraform, CloudFormation, or Ansible
- Automate configuration management and system provisioning tasks
- Monitor system health and performance using tools like Prometheus, Grafana, ELK, etc.
- Ensure system security through best practices and proactive monitoring
- Collaborate with developers to ensure smooth integration and deployment

Must-Have Skills:
- 3+ years of DevOps or SRE experience in a production environment
- Experience with cloud services (AWS, GCP, Azure)
- Strong knowledge of CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar
- Proficiency with Docker and container orchestration (Kubernetes preferred)
- Hands-on with Terraform, Ansible, or other infrastructure-as-code tools
- Good understanding of Linux/Unix system administration
- Familiarity with version control systems (Git) and branching strategies
- Knowledge of scripting languages (Bash, Python, or Go)

Good-to-Have (Optional):
- Exposure to monitoring/logging stacks: ELK, Prometheus, Grafana
- Experience in securing cloud environments
- Knowledge of Agile and DevOps culture
- Understanding of microservices and service mesh tools (Istio, Linkerd)
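The "monitor system health" duty above typically boils down to small automation scripts. This is an illustrative sketch only (not part of the listing): given recent probe results per service, flag any service whose success rate falls below a threshold. The service names and threshold are invented for the example.

```python
# Minimal health-check sketch: flag services whose recent probe success rate
# drops below a threshold. Service names and threshold are illustrative.

def unhealthy_services(probes, threshold=0.9):
    """probes maps service name -> list of booleans (True = probe succeeded)."""
    flagged = []
    for service, results in probes.items():
        if not results:
            continue  # no data yet; skip rather than alert
        success_rate = sum(results) / len(results)
        if success_rate < threshold:
            flagged.append((service, round(success_rate, 2)))
    return sorted(flagged)

if __name__ == "__main__":
    recent = {
        "api": [True, True, False, True, True],   # 0.8 -> flagged
        "db": [True] * 10,                        # 1.0 -> healthy
        "cache": [False, False, True, True],      # 0.5 -> flagged
    }
    print(unhealthy_services(recent))
```

In a real deployment this check would feed a tool such as Prometheus or Grafana rather than print to stdout.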

Posted 6 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Description - External
Join Public Cloud Platform Engineering (AWS, GCP, Azure) as a Software Engineer and work with top engineering talent to build innovative cloud platforms for healthcare applications. This role offers a unique opportunity to engage in analysis, design, coding, engineering, testing, debugging, and more. At our company, every position prioritizes quality in every output. Be part of a team that's transforming healthcare delivery through cutting-edge technology.
• Build and operate secure cloud platform capabilities that meet business requirements
• Innovate to improve efficiency, reduce technical drag, and create usable app patterns
• Practice SRE principles: eliminate repetitive tasks, monitor performance, simplify work practices, define outcomes and metrics, and assure operational quality
• Manage security controls at the platform layer to enable the organization to operate securely, efficiently, and within policy
• Assist multiple partner teams to understand and apply information security guidance and standards with the goal of mitigating information security risks

AI Expectations: You will utilize AI tools and applications to perform tasks, make decisions, and enhance your work and overall cloud migration velocity. You are expected to use AI-powered software for data analysis, coding, and overall productivity.

Qualifications - External
• Undergraduate degree or equivalent experience
• 5+ years of experience as a software development engineer, or equivalent hands-on experience producing code for production systems
• 2+ years of experience in a public cloud such as AWS, Azure, or GCP beyond basic IaaS functionality
• 2+ years of experience programming in at least one high-level language (Python, Golang, JavaScript, etc.)
• 2+ years of experience with the software build cycle
• 1+ years of engineering experience building infrastructure using code and repeatable designs
• 1+ years of experience in automation of CI/CD using GitHub Actions or similar, and a source control system such as Git
• 1+ years of experience with Agile/lean development practices
• 1+ years of experience with Terraform

Preferred Qualifications:
• Experience with containers and orchestration platforms such as Kubernetes
• Algorithms, data structures, OO design, and other CS concepts
• API design and lifecycle management (REST, etc.)
• Application foundations and frameworks (Spring, Flask, etc.)
• Data storage, caching, and optimization (NoSQL databases, Redis, PostgreSQL, etc.)
• Inter-service messaging and stream discovery (SQS, Pub/Sub, Kafka, etc.)
• Instrumentation, logging, and tracing (Prometheus, CloudWatch, Stackdriver, Azure Monitor, etc.)
• An evidence-based approach to making decisions and solving problems
• A demonstrated design mindset, capable of building distributed/scalable services
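"Building infrastructure using code and repeatable designs" usually means one parameterized template stamped out per environment. As a hedged illustration only (the field names, values, and environments below are invented, and real teams would express this in Terraform rather than Python):

```python
# Sketch of a repeatable infrastructure design: environments share one base
# template and differ only in declared overrides. All names are illustrative.

BASE = {"instance_type": "t3.micro", "min_nodes": 1, "max_nodes": 2, "monitoring": True}

def render_env(name, overrides=None):
    """Merge per-environment overrides onto a copy of the shared base template."""
    cfg = dict(BASE)          # copy, so the base template is never mutated
    cfg.update(overrides or {})
    cfg["name"] = name
    return cfg

if __name__ == "__main__":
    dev = render_env("dev")
    prod = render_env("prod", {"instance_type": "m5.large", "min_nodes": 3, "max_nodes": 10})
    print(dev["instance_type"], prod["min_nodes"])  # t3.micro 3
```

The design point is the same one Terraform modules encode: environments diverge only through explicit inputs, never by copy-pasting config.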

Posted 6 days ago

Apply

5.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

Roles and Responsibilities
The responsibilities required for the job include, but are not limited to, the following:
- Participate in the inquiry at the pre-sales stage and help choose the right hardware and software to match customers' requirements
- Prepare all the engineering drawings and documentation necessary for the project
- Develop and test PLC logic and SCADA/HMI according to clients' requirements by studying the BOM, I/O List, P&ID, Logic and Control Philosophy, Process Flow Diagram, Loop Drawings, Interlocks List, and Critical Parameters
- Develop SCADA graphics with advanced facilities such as Alarm Configuration, Instrument and Process Faceplates, Data Logging, Live Data Trends and Historical Trends, Batch and Periodic Report Generation, Trend Templates, System Configuration, Recipe Files, and Local Messages
- Conduct the F.A.T. (Factory Acceptance Test) after completion of panel manufacturing
- Prepare technical documentation such as annotations, S.O.P.s, operating manuals for the system, and loop drawings
- Participate in commissioning and the SAT (Site Acceptance Test) at the customer's site
- Give hands-on training on PLC and SCADA/HMI to clients after project completion, if needed

Desired Candidate Profile
- Minimum 5-7 years of experience in PLC, HMI, and SCADA programming
- Experience on Rockwell platforms is strongly preferred
- Should be able to understand and troubleshoot panel wiring for motor starters such as DOL, Star-Delta, VFD, Soft Starters, etc.
- Should be able to understand the wiring of field instruments such as Temperature, Pressure, Level, and Flow Transmitters and Switches
- Knowledge of PlantPAx systems, batch programming (per ISA-88), SIS (process functional safety SIL2 and SIL3 systems), and Industry 4.0 solutions is highly preferred

Posted 6 days ago

Apply

1.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Source: LinkedIn

Brief about the Position:
Preferred candidate from a B.Tech/B.E. or Data Analytics background with 1 year of experience in a large corporate environment, ideally supporting training and development initiatives. The candidate should be able to consolidate data from different tools, contribute to L&D metrics development and reporting, assist in Jira monitoring, and provide management with new ideas based on data analysis.

Primary Job Responsibilities
- Experience in eLearning tool management software, preferably PARADISO
- Provide training on the process of logging in to CAST University through to accessing courses in the LMS
- Build out learning plans and track team progress against ongoing learning and readiness plans
- Interface with salespeople to manage LMS requests for global customers
- Regular reporting on learning activity and learning impact for senior stakeholders
- Manage onboarding and role-training creation for new joiners
- Own the online training sections of our Learning Management System (LMS)
- Timesheet management: provide training on the process of closing timesheets for all customer and investment projects
- Jira: provide training on Jira workflows when required
- Excellent knowledge of MS Office, including Word, Excel (macros), and PowerPoint
- Contribute to L&D metrics development and reporting, including evaluating the impact of L&D initiatives
- Proven time management skills, follow-through, and ability to deliver on multiple tasks

Skillset and Qualifications:
- Bachelor's degree in an engineering discipline
- 1 year of experience in a large corporate environment, ideally supporting training initiatives
- Minimum 70% consistent marks in 10th, +2, and degree
- Excellent communication skills (oral and written), including an ability to communicate effectively at senior levels within internal and client organizations
- Needs to be a self-starter in this role

Posted 6 days ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

Job Title: Software Engineer (Java)
Experience: 2–4 Years
Location: Ahmedabad

Position Overview:
We are in search of a talented and motivated Software Engineer proficient in Java. This role presents an exciting opportunity for an individual passionate about software development to contribute to impactful projects, collaborate with experienced professionals, and enhance their skills in a dynamic work environment.

Role & Responsibility:
• Collaborate with Tech Leads to design, develop, and maintain software products using Java.
• Apply foundational principles of Object-Oriented Programming (OOP) in Java to deliver efficient and scalable solutions.
• Assist in implementing Java fundamentals such as concurrency, logging, and error handling to contribute to robust codebases.
• Follow industry-standard practices throughout the software development lifecycle, ensuring the delivery of high-quality software solutions.
• Participate in debugging activities to identify and resolve issues in multithreaded applications promptly.
• Utilize Java and the Vert.x framework for event-driven application development.
• Implement message queue architecture using Kafka for efficient data processing and communication.
• Demonstrate proficiency in data structures and algorithms to optimize software performance.
• Engage actively within Agile or similar product methodologies to deliver innovative solutions in a timely manner.
• Utilize version control systems like Git and build systems such as Jenkins, Maven, or similar tools.
• Assist in developing microservices and gain exposure to AWS services like EC2, Lambda, S3, and CloudWatch.
• Collaborate with the team to understand API management platforms, design standards, and best practices.

Skills and Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 2–4 years of hands-on experience in software design and development using Java.
• Proficiency in applying OOP principles and basic design patterns in Java.
• Familiarity with Java fundamentals including concurrency, logging, and error handling is a plus.
• Basic understanding of debugging techniques in multithreaded applications.
• Knowledge of data structures and algorithms.
• Exposure to Agile or similar product development methodologies.
• Familiarity with version control systems (e.g., Git) and build systems (e.g., Jenkins, Maven).
• Experience with Kafka for message queue architecture.
• Familiarity with the Vert.x framework for event-driven application development is advantageous.
• Experience with AWS services and microservices architecture is advantageous.
• Strong problem-solving skills and attention to detail.
• Effective communication and collaboration abilities.
• Ability to thrive in a fast-paced and collaborative environment.
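The "message queue architecture using Kafka" requirement refers to decoupling producers from consumers through a queue. A Kafka broker can't be reproduced here, so this is a minimal in-process sketch of the same pattern using only the standard library (the listing's stack is Java; Python is used here purely for brevity, and the topic-like names and payloads are invented):

```python
# Producer/consumer decoupling sketch: the producer publishes messages to a
# queue and a consumer thread processes them independently. A None sentinel
# signals end-of-stream. Message names are illustrative only.
import queue
import threading

def producer(q, items):
    for item in items:
        q.put(item)   # publish each message
    q.put(None)       # sentinel: no more messages

def consumer(q, out):
    while True:
        msg = q.get()
        if msg is None:           # sentinel reached; stop consuming
            break
        out.append(msg.upper())   # "process" the message

if __name__ == "__main__":
    q = queue.Queue()
    processed = []
    t = threading.Thread(target=consumer, args=(q, processed))
    t.start()
    producer(q, ["order-created", "payment-received"])
    t.join()
    print(processed)  # ['ORDER-CREATED', 'PAYMENT-RECEIVED']
```

Kafka adds durable partitioned logs, consumer groups, and replay on top of this basic shape, but the producer/consumer decoupling is the same.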

Posted 6 days ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Greetings from Infosys BPM Ltd!

Exclusive Women's Walk-in Drive

We are hiring for MIM, Networking, Content and Technical Writer, and ITIL skills. Please walk in for an interview on 13th June 2025 at the Pune location.

Note: Please carry a copy of this email to the venue and make sure you register your application before attending the walk-in. Please use the link below to apply and register your application. Please mention your Candidate ID on top of the resume.

https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-215163

Interview details
Interview Date: 13th June 2025
Interview Time: 10 AM till 1 PM
Interview Venue: Pune :: Hinjewadi Phase 1, Infosys BPM Limited, Plot No. 1, Building B1, Ground Floor, Hinjewadi Rajiv Gandhi Infotech Park, Hinjewadi Phase 1, Pune, Maharashtra 411057

Please find the job descriptions below for your reference. Work from office. A minimum of 2 years of project experience is mandatory.

Job Description: MIM
- Strong knowledge of IT service management, including ITIL
- Respond to a reported incident, identify the cause, and initiate the incident management process
- Participate in root cause analysis meetings, gather lessons learned, and manage and implement continuous improvement processes
- Ensure client SLAs/KPIs and customer satisfaction expectations are achieved
- Restore a failed IT service as quickly as possible

Job Description: Networking
- Hands-on experience in switching and routing is a must
- The network engineer is responsible for the day-to-day delivery and operation of the organization's network
- Should manage LAN, WAN, firewalls, and monitoring tools
- Strong understanding of routing protocols like BGP, OSPF, and EIGRP
- Strong understanding of switching protocols like STP, RSTP, VLAN, HSRP, VRRP, and link aggregation
- Strong understanding of tunnels such as GRE and IPsec, plus NAT and ACLs
- Knowledge of virtual routers, routing instances, and IPsec tunnels
- Skilled in creating High-Level Design (HLD) network diagrams in Visio or draw.io
- Experience with firewalls (Juniper, Fortinet, Cisco ASA)
- Knowledge of OpenVPN
- Knowledge of monitoring tools

Job Description: Content and Technical Writer
- Develop high-quality technical documents, including user manuals, guides, and release notes
- Collaborate with cross-functional teams to gather requirements and create accurate documentation
- Conduct functional and manual testing to ensure compliance with FDA regulations
- Ensure adherence to ISO standards and maintain a clean, organized document management system
- Strong understanding of the infrastructure domain
- Convert complex technical concepts into easy-to-consume documents for the targeted audience
- Mentor the team in technical writing

Job Description: ITIL
- Oversee the incident management process and the team members involved in resolving incidents
- Respond to a reported service incident, identify the cause, and initiate the incident management process
- Collaborate with the incident management team to ensure that all protocols are diligently followed
- Log all incidents and their resolutions to identify recurring malfunctions

REGISTRATION PROCESS:
The Candidate ID and SHL Test (AMCAT ID) are mandatory to attend the interview. Please follow the instructions below to successfully complete the registration. (Candidates without registration and assessment will not be allowed into the interview.)

Candidate ID registration process:
STEP 1: Visit https://career.infosys.com/joblist
STEP 2: Click on "Register", provide the required details, and submit.
STEP 3: Once submitted, your Candidate ID (100XXXXXXXX) will be generated.
STEP 4: The Candidate ID will be shared to the registered email ID.

SHL Test (AMCAT ID) registration process:
This assessment is proctored, and candidates are evaluated on basic analytics, English comprehension, and writex (email writing).
STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0 STEP 2: Click on "Start new test" and follow the instructions to complete the assessment. STEP 3: Once completed, please make a note of the AMCAT ID( Access you Amcat id by clicking 3 dots on top right corner of screen). NOTE: During registration, you'll be asked to provide the following information: Personal Details: Name, Email Address, Mobile Number, PAN number. Availability: Acknowledgement of work schedule preferences (Shifts, Work from Office, Rotational Weekends, 24/7 availability, Transport Boundary) and reason for career change. Employment Details: Current notice period and total annual compensation (CTC) in the format 390000 - 4 LPA (example). Candidate Information: 10-digit candidate ID starting with 100XXXXXXX, Gender, Source (e.g., Vendor name, Naukri/LinkedIn/Found it, or Direct), and Location Interview Mode: Walk-in Attempt all questions in the SHL Assessment app. The assessment is proctored, so choose a quiet environment. 
Use a headset or Bluetooth headphones for clear communication. A passing score is required for further interview rounds. Five or more tab toggles, multiple faces detected, face not detected, or any malpractice will result in rejection. Once you've finished, submit the assessment and make a note of the AMCAT ID (15 digits) used for the assessment.

Documents to carry:
- A note of your Candidate ID and AMCAT ID, along with the registered email ID
- Two sets of your updated resume/CV (hard copy)
- Original ID proof for security clearance
- Individual headphones/Bluetooth headset for the interview

Pointers to note:
- Please do not carry laptops/cameras to the venue, as these will not be allowed due to security restrictions.
- An original government ID card is a must for security clearance.

Regards,
Infosys BPM Recruitment Team

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

About the Company
Our client is a trusted global innovator of IT and business services, helping clients transform through consulting, industry solutions, business process services, digital and IT modernization, and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients' long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe.

Job Title: Lead/Senior Tableau Admin with AWS
Location: Noida
Experience: 8 to 12 years
Job Type: Contract to hire
Notice Period: Immediate joiner

Mandatory Skills and Qualifications:
- Lead/Senior Tableau Admin with AWS experience, 8 to 12 years
- Proven experience as a Tableau Administrator, with strong skills in Tableau Server and Tableau Desktop
- Experience with AWS, particularly with services relevant to hosting and managing Tableau Server (e.g., EC2, S3, RDS)
- Familiarity with SQL and experience working with various databases
- Knowledge of data integration, ETL processes, and data warehousing principles
- Strong problem-solving skills and the ability to work in a fast-paced environment
- Excellent communication and collaboration skills
- Relevant certifications in Tableau and AWS are a plus

A Tableau Administrator, also known as a Tableau Server Administrator, is responsible for managing and maintaining Tableau Server, a platform that enables organizations to create, share, and collaborate on data visualizations and dashboards. Here's a typical job description for a Tableau Admin:
1. Server Administration: Install, configure, and maintain Tableau Server to ensure its reliability, performance, and security.
2. User Management: Manage user accounts, roles, and permissions on Tableau Server, ensuring appropriate access control.
3. Security: Implement security measures, including authentication, encryption, and access controls, to protect sensitive data and dashboards.
4. Data Source Connections: Set up and manage connections to various data sources, databases, and data warehouses for data extraction.
5. License Management: Monitor Tableau licensing, allocate licenses as needed, and ensure compliance with licensing agreements.
6. Backup and Recovery: Establish backup and disaster recovery plans to safeguard Tableau Server data and configurations.
7. Performance Optimization: Monitor server performance, identify bottlenecks, and optimize configurations to ensure smooth dashboard loading and efficient data processing.
8. Scaling: Scale Tableau Server resources to accommodate increasing user demand and data volume.
9. Troubleshooting: Diagnose and resolve issues related to Tableau Server, data sources, and dashboards.
10. Version Upgrades: Plan and execute server upgrades, apply patches, and stay current with Tableau releases.
11. Monitoring and Logging: Set up monitoring tools and logs to track server health, user activity, and performance metrics.
12. Training and Support: Provide training and support to Tableau users, helping them with dashboard development and troubleshooting.
13. Collaboration: Collaborate with data analysts, data scientists, and business users to understand their requirements and assist with dashboard development.
14. Documentation: Maintain documentation for server configurations, procedures, and best practices.
15. Governance: Implement data governance policies and practices to maintain data quality and consistency across Tableau dashboards.
16. Integration: Collaborate with IT teams to integrate Tableau with other data management systems and tools.
17. Usage Analytics: Generate reports and insights on Tableau usage and adoption to inform decision-making.
18. Stay Current: Keep up to date with Tableau updates, new features, and best practices in server administration.

A Tableau Administrator plays a vital role in ensuring that Tableau is effectively utilized within an organization, allowing users to harness the power of data visualization and analytics for informed decision-making.

Qualifications: Bachelor's degree in Computer Science (or a related field)

Posted 6 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Role: Python + Microservices
Experience range: 8-10 years
Location: Current location must be Bangalore
NOTE: Candidates interested in the walk-in drive in Bangalore must apply.

Job description:

Preferred Qualifications:
- Experience with cloud platforms is a plus.
- Familiarity with Python frameworks (Flask, FastAPI, Django).
- Understanding of DevOps practices and tools (Terraform, Jenkins).
- Knowledge of monitoring and logging tools (Prometheus, Grafana, Stackdriver).

Requirements:
- Proven experience as a Python developer, specifically in developing microservices.
- Strong understanding of containerization and orchestration (Docker, Kubernetes).
- Experience with Google Cloud Platform, specifically Cloud Run, Cloud Functions, and other related services.
- Familiarity with RESTful APIs and microservices architecture.
- Knowledge of database technologies (SQL and NoSQL) and data modelling.
- Proficiency in version control systems (Git).
- Experience with CI/CD tools and practices.
- Strong problem-solving skills and the ability to work independently and collaboratively.
- Excellent communication skills, both verbal and written.
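The "RESTful APIs and microservices architecture" requirement above centers on routing requests to handlers, which frameworks like Flask and FastAPI do via decorators. As a hedged, dependency-free sketch of that pattern (the routes and handler names are invented; a real service would use one of those frameworks, not this toy dispatcher):

```python
# Minimal REST-style routing sketch: handlers register themselves for a
# (method, path) pair, Flask-decorator style, and dispatch looks them up.

ROUTES = {}

def route(method, path):
    """Register a handler for (method, path)."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/health")
def health():
    return 200, {"status": "ok"}

@route("GET", "/users")
def list_users():
    return 200, {"users": ["asha", "ravi"]}

def dispatch(method, path):
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, {"error": "not found"}
    return handler()

if __name__ == "__main__":
    print(dispatch("GET", "/health"))   # (200, {'status': 'ok'})
    print(dispatch("POST", "/users"))   # (404, {'error': 'not found'})
```

Flask's `@app.route(...)` and FastAPI's `@app.get(...)` are elaborations of this registration-plus-lookup idea, with request parsing, serialization, and ASGI/WSGI plumbing on top.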

Posted 6 days ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Job Information
Date Opened: 06/11/2025
Job Type: Full time
Industry: IT Services
City: Bangalore, Hyderabad
State/Province: Karnataka
Country: India
Zip/Postal Code: 560001

Job Description
The team is looking for a Senior DevOps Engineer with deep expertise in building and managing Infrastructure as Code (IaC) on AWS using Terraform and Terragrunt. You will play a key role in architecting scalable, secure, and highly available cloud infrastructure to support our engineering teams and production environments. This is a hands-on role that involves collaborating with developers, architects, and operations teams to automate infrastructure provisioning, optimize cloud resources, and enforce DevOps best practices.

Responsibilities:
- Design, develop, and manage scalable cloud infrastructure on AWS using Terraform and Terragrunt.
- Create and maintain reusable, modular, and version-controlled IaC modules.
- Implement and enforce infrastructure standards, security best practices, and compliance policies.
- Build and manage CI/CD pipelines to automate infrastructure provisioning and deployment.
- Collaborate with engineering teams to ensure seamless integration between infrastructure and applications.
- Monitor, troubleshoot, and optimize cloud environments for cost, performance, and reliability.
- Provide guidance on DevOps best practices and mentor junior team members.
- Stay current with AWS service updates and evolving DevOps tooling.

Qualifications:
- 6+ years of experience in DevOps, Cloud Engineering, or Infrastructure Engineering.
- Proven experience developing infrastructure using Terraform and Terragrunt in production environments.
- Strong expertise in AWS, including services like EC2, VPC, S3, RDS, IAM, ECS, Lambda, CloudWatch, etc.
- Experience with multi-account AWS setups and account governance (e.g., AWS Organizations, Control Tower).
- Knowledge of infrastructure testing frameworks (e.g., Terratest, Checkov, or InSpec).
- Exposure to containerization and orchestration (Docker, ECS, EKS, Kubernetes).
- Solid understanding of infrastructure design patterns, networking, and cloud security.
- Familiarity with configuration management tools (Ansible, Chef, Puppet).
- Experience with CI/CD tools such as GitHub Actions, GitLab CI, CircleCI, or Jenkins.
- Understanding of cost optimization and cloud cost analysis tools.
- Proficient in scripting languages like Bash, Python, or Go for automation tasks.
- Familiarity with version control (Git), monitoring, and logging tools.
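"Enforce infrastructure standards and compliance policies" in a role like this often means static checks over declared resources, in the spirit of tools like Checkov. As an illustrative sketch only (the resource data and required tag keys are invented), a check that every resource carries a set of mandatory tags:

```python
# Checkov-style policy sketch: report every declared resource that is missing
# one or more required tags. Resource names and tag keys are illustrative.

REQUIRED_TAGS = {"owner", "environment", "cost-center"}

def missing_tags(resources):
    """Return {resource_name: sorted missing tag keys} for non-compliant resources."""
    violations = {}
    for name, attrs in resources.items():
        missing = REQUIRED_TAGS - set(attrs.get("tags", {}))
        if missing:
            violations[name] = sorted(missing)
    return violations

if __name__ == "__main__":
    plan = {
        "web-server": {"tags": {"owner": "platform", "environment": "prod", "cost-center": "42"}},
        "scratch-bucket": {"tags": {"owner": "data"}},
    }
    print(missing_tags(plan))  # {'scratch-bucket': ['cost-center', 'environment']}
```

Wired into a CI/CD pipeline, a check like this fails the build before non-compliant infrastructure is ever provisioned, which is also how mandatory tagging supports the cost-analysis duties listed above.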

Posted 6 days ago

Apply

0.0 years

0 Lacs

Delhi

On-site

Source: Indeed

Job requisition ID: 82378
Date: Jun 11, 2025
Location: Delhi
Designation: Assistant Manager

Your potential, unleashed.
India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realize your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

Skills Required
- Good technical knowledge of different versions of SharePoint (2013/2016)
- Installing and configuring a SharePoint environment
- Configuring SharePoint farms, Alternate Access Mappings (AAM), zones, and authentication
- SharePoint patch management
- Configuring service applications (BDC, MMS, UPS, Search, etc.)
- Configuring logging, quotas, monitoring levels, health reports, and security
- Managing user accounts, managed accounts, and service accounts
- Monitoring and analysing the SharePoint environment, generating health, administrative, and web analytics reports
- Identifying and resolving health and performance issues
- Good understanding of various identity providers such as Azure AD, and configuring them using a custom claim provider
- Custom solution deployment on SharePoint Server
- Installing and configuring Workflow Manager
- Installing and configuring Office Web App Server/Office Online Server
- Knowledge of load balancers is an added advantage
- Operating system concepts: Active Directory, security, performance
- Networking concepts: DNS, protocols, devices
- IIS concepts: configuration, architecture, SSL
- Windows architecture: event monitoring, memory management concepts
- .NET basic concepts, web architecture
- SQL Server configuration and administration: clusters and Always On
- Tools: Netmon, Perfmon, SQL Profiler, Fiddler
- Advanced troubleshooting skills
- Experience with Workflow, SharePoint Designer, and PowerShell scripting is an added advantage

How you'll grow
Connect for impact: Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead: You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all: At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams, and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude, and potential each and every one of us brings to the table to make an impact that matters.
Drive your career: At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up-/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.
Everyone's welcome... entrust your happiness to us: Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.
Interview tips: We want job seekers exploring opportunities at Deloitte to feel prepared, confident, and comfortable. To help you with your interview, we suggest that you do your research and know some background about the organisation and the business area you're applying to. Check out recruiting tips from Deloitte professionals.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra

On-site

Source: Indeed

Our software engineers at Fiserv bring an open and creative mindset to a global team developing mobile applications, user interfaces and much more to deliver industry-leading financial services technologies to our clients. Our talented technology team members solve challenging problems quickly and with quality. We're seeking individuals who can create frameworks, leverage developer tools, and mentor and guide other members of the team. Collaboration is key and whether you are an expert in a legacy software system or are fluent in a variety of coding languages you're sure to find an opportunity as a software engineer that will challenge you to perform exceptionally and deliver excellence for our clients. Full-time Entry, Mid, Senior Yes (occasional), Minimal (if any) Responsibilities Requisition ID R-10362939 Date posted 06/11/2025 End Date 06/30/2025 City Pune State/Region Maharashtra Country India Location Type Onsite Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title Sr Associate, Application Support About your role: At Fiserv, we are committed to providing exceptional service and support to our clients. As an Application Support - Sr Associate II, you will be part of a dedicated team ensuring the smooth operation and maintenance of critical business applications. This role involves diagnosing and resolving technical issues, providing guidance to end-users, and collaborating with various teams to improve application performance and reliability. 
What you'll do:
Provide support for business applications, ensuring maximum uptime and performance.
Troubleshoot and resolve application issues, collaborating with development teams as needed.
Monitor application performance and recommend improvements to enhance efficiency.
Document support activities, maintain detailed logs, and develop user guides.
Responsibilities listed are not intended to be all-inclusive and may be modified as necessary.

Experience you'll need to have:
5+ years of experience in application support
5+ years of experience in Java/J2EE and Spring Boot
Experience troubleshooting and resolving technical issues
Working knowledge of Oracle and SQL
Knowledge of Git and shell scripting

Experience that would be great to have:
Familiarity with monitoring and logging tools
Familiarity with React
Experience with automation and scripting languages

Thank you for considering employment with Fiserv. Please: Apply using your legal name. Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information.
Any communications from a Fiserv representative will come from a legitimate Fiserv email address.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra

On-site

Indeed logo

Job details
Employment Type: Full-Time
Location: Pune, Maharashtra, India
Job Category: Engineering
Job Number: WD30243623

Job Description
Job Title: Senior Software Engineer (AI Engineering)
Posting Title: Senior Software Engineer (Full Stack - AI & Data Engineering)
Job Code/Job Profile/Job Level: 172
Preferred Locations: India (Pune)

Introduction
The future is being built today, and Johnson Controls is making that future more productive, more secure and more sustainable. We are harnessing the power of cloud, AI/ML and data analytics, the Internet of Things (IoT), and user design thinking to deliver on the promise of intelligent buildings and smart cities that connect communities in ways that make people’s lives and the world better. As the AI landscape expands to GenAI, we must continuously evolve and pivot to capitalize on these advancements and bring them through the maturity cycle to benefit our teams and our customers.

What you will do
The Johnson Controls Data Strategy & AI hub’s mission is to infuse AI capabilities into products using a collaborative approach, working alongside multiple business units. One of the charters of the hub is to create end-to-end enablers to streamline AI/ML operations, from data supply strategy to data discovery to model training and development to deployment of AI services in the cloud as well as at the edge. The AI Hub team is looking to accelerate the creation of tools, services and workflows to aid in the quick and widespread deployment of AI services on a global scale. We are looking for a hands-on Senior Software Engineer with industry experience to contribute to foundational AI/ML engineering with repeatability in mind. The Senior Engineer will work with data scientists, platform/data architects and domain experts from teams across JCI and build enablers to help in productizing AI/ML models.
AI Engineering: Use sound and widely accepted software engineering principles to deliver high-quality software that forms the foundation of our end-to-end AI/ML solutions that make buildings smarter.
Translating requirements: Translate AI/ML customer features into a set of AI Engineering requirements and deliver them in high-quality, well-thought-out, cohesive responses.
Generative AI: Use GenAI frameworks in designing, developing, and implementing LLMOps for cutting-edge generative artificial intelligence models, considering cost, latency, and multi-cloud support.

How you will do it
Be part of a highly performant technical team consisting of backend, MLOps, LLMOps, and DevOps engineers and architects to bring workflows to life that aid in the development and widespread deployment of AI services in the cloud and at the edge.
Work with Product and Data Science teams; understand and translate requirements to well-designed modular components, accounting for the variability in data sources and deployment targets.
Help evaluate vendors, open-source and proprietary technologies, and present recommendations to onboard potential partners; automate machine learning workflows, model training and versioned experimentation, digital feedback, and monitoring.

What we look for
Required
BS in Computer Science/Electrical or Computer Engineering, or a degree and demonstrated technical abilities in similar areas
5+ years of experience as a Software Engineer in any of the following fields: Cloud Services, IoT
2+ years of experience in UI, Angular 16+, JavaScript, HTML, CSS
3+ years of programming and object-oriented design experience in any of the modern languages such as Python or Node.js
2+ years of experience with TDD (Test-Driven Development) methodology, ensuring high-quality software is built at pace
Experience with building backend services on AWS, Azure or GCP
API-first design experience accounting for security, authentication/authorization, logging, and usage patterns
Experience working with message brokers, caches, queues, and pub/sub concepts
Container experience using technologies such as Kubernetes, Docker, AKS
Knowledgeable in the SCRUM/Agile development methodology
Strong spoken and written communication skills

Preferred Qualifications
MS in Computer Science/Electrical or Computer Engineering
5+ years of experience as a Software Engineer in any of the following fields: Finance, Cloud Services, IoT
1+ years of experience working alongside Data Scientists to productize AI/ML models
Experience developing LLMOps solutions for generative AI using standard frameworks like LangChain, LlamaIndex, etc.
Experience working on conversational agents using Azure AI Services or similar services from any cloud provider
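For context on the "message brokers, caches, queues, pub/sub" experience listed above, a minimal in-process publish/subscribe sketch in pure Python can illustrate the core idea; the broker class and topic names here are illustrative inventions, not part of any JCI or vendor API:

```python
from collections import defaultdict
from typing import Callable

class InProcessBroker:
    """Toy publish/subscribe broker: topics map to lists of subscriber callbacks."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> int:
        # Deliver the message to every subscriber of the topic; return delivery count.
        handlers = self._subscribers.get(topic, [])
        for handler in handlers:
            handler(message)
        return len(handlers)

broker = InProcessBroker()
received = []
broker.subscribe("model.trained", received.append)
delivered = broker.publish("model.trained", {"model_id": "m-1", "accuracy": 0.91})
print(delivered, received[0]["model_id"])  # → 1 m-1
```

Real brokers (Kafka, SQS, Redis pub/sub) add durability, ordering, and delivery guarantees on top of this basic topic-to-subscribers mapping.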

Posted 6 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Tesco India • Bengaluru, Karnataka, India • Full-Time • Permanent • Apply by 13-Jun-2025

About the role
Systems Engineer III - Performance Engineer

What is in it for you
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

You will be responsible for
Collaborate with product managers and developers to understand product requirements and contribute to performance-focused design discussions.
Create and maintain comprehensive non-functional test cases and use cases tailored to performance testing needs.
Translate NFRs into detailed performance and security test plans, including SLAs, SLOs, and capacity benchmarks.
Develop detailed performance test plans, including test cases and test data, and ensure alignment with business expectations.
Execute various types of performance testing such as load, stress, scalability, and endurance tests to assess system behaviour under different conditions.
Analyse performance test results to identify bottlenecks and inefficiencies and provide actionable insights for resolution.
Monitor system performance using diagnostic tools and provide real-time feedback during testing cycles.
Automate performance tests using modern, open-source tools and scripting languages to streamline testing processes.
Collaborate with DevSecOps to integrate security testing into CI/CD workflows and enforce shift-left security practices.
Document and report security vulnerabilities with risk ratings, reproduction steps, and remediation guidance.
Work closely with QE, DevOps, and Development teams to ensure performance and security best practices are embedded throughout the SDLC.
Provide detailed test reports, dashboards, and technical documentation for stakeholders.

You will need
Bachelor’s degree in computer science or a related engineering discipline.
12+ years of experience in Quality Engineering, preferably in retail or product organisations.
Application Testing: Hands-on experience in performance testing of APIs, microservices, web applications, and native mobile apps.
Performance Testing Tools: Proficient in industry-standard tools such as JMeter, K6, Locust, Gatling, etc., for load and stress testing.
Scripting & Automation: Strong programming skills in Java, Python, and shell scripting for developing and automating performance test scripts.
Monitoring & Diagnostics: Expertise in using APM and logging tools - AppDynamics, Dynatrace, Splunk, New Relic, RunScope, Grafana - to monitor and analyze system performance.
Cloud & Containerization: Solid understanding of cloud platforms (Azure), container orchestration (Kubernetes), and containerization (Docker) for scalable performance testing.
Database Performance: Ability to analyze and optimize SQL queries and database performance; familiarity with SQL, NoSQL databases, and pub-sub messaging systems.
Infrastructure Knowledge: Understanding of load balancers, infrastructure design, and application architecture in both Azure cloud and on-premises environments.
Security Tools: Experience with security and vulnerability assessment tools such as Burp Suite, OWASP ZAP, Metasploit, Nessus, and Nmap.
Security Best Practices: Strong grasp of OWASP Top 10, CWE/SANS Top 25, and secure coding principles.
Operating Systems: Comfortable working in Linux/Unix environments.
Analytical Skills: Excellent problem-solving, debugging, and troubleshooting abilities.
Communication: Strong verbal and written communication skills, with the ability to convey complex technical concepts clearly.

About us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets.
Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Technology
Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles. At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products, essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations – from identifying and authenticating customers, managing products, pricing, promoting, enabling customers to discover products, facilitating payment, and ensuring delivery. By developing a comprehensive Retail Platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. This adaptability allows us to respond flexibly without the need to overhaul our technology, thanks to the capabilities we have built.
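One concrete slice of the result-analysis work this performance engineering role describes is deriving latency percentiles from raw load-test samples. Below is a minimal sketch using only the Python standard library; the sample response times are invented for illustration:

```python
import statistics

def latency_summary(samples_ms):
    """Return p50/p95/p99 latency from a list of response times in milliseconds."""
    # quantiles(n=100) yields 99 cut points; index 49 is p50, 94 is p95, 98 is p99.
    q = statistics.quantiles(samples_ms, n=100)
    return {"p50": q[49], "p95": q[94], "p99": q[98]}

# Simulated load-test samples: mostly fast responses, with a slow tail.
samples = [100] * 90 + [300] * 9 + [1200]
summary = latency_summary(samples)
print(summary["p50"], summary["p95"], summary["p99"])  # → 100.0 300.0 1191.0
```

In practice, tools like JMeter or Locust report these percentiles directly; computing them yourself is useful when analysing raw logs exported from APM or logging stacks.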

Posted 6 days ago

Apply

0.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

The HiLabs Story
HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.

HiLabs Team
Multidisciplinary industry leaders, healthcare domain experts, and AI/ML and data science experts: professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, the Indian Institute of Management (IIM), and the Indian Institute of Technology (IIT). Be part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.

Job Title: Python Web Scraper
Job Location: Bengaluru, Karnataka

Job summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for a Python Web Scraper who continually strives to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities:
Design and build scalable, reliable web scraping solutions using Python/PySpark.
Develop enterprise-grade scraping services that are robust, fault-tolerant, and production-ready.
Work with large volumes of structured and unstructured data; parse, clean, and transform as required.
Implement robust data validation and monitoring processes to ensure accuracy, consistency, and availability.
Write clean, modular code with proper logging, retries, error handling, and documentation.
Automate repetitive scraping tasks and optimize data workflows for performance and scalability.
Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
Analyze and identify data sources relevant to the business.
Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.

Desired Profile:
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
2-4 years of experience in web scraping, data crawling, or related data engineering.
Proficiency in Python with web scraping tools and libraries (e.g., Beautiful Soup, Scrapy, or Selenium).
Basic working knowledge of PySpark and data tools like Apache Airflow and EMR.
Experience with cloud-based platforms (AWS, Google Cloud, Azure) and familiarity with cloud-native data tools like Apache Airflow and EMR.
Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
Familiarity with version control systems like Git.

HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results. Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.
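The "logging, retries, error handling" responsibility above can be sketched minimally as follows; the fetcher is injected as a callable so the retry logic stays testable, and all names here are illustrative, not HiLabs code:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("scraper")

def fetch_with_retries(fetch, url, retries=3, backoff_s=0.01):
    """Call fetch(url), retrying on failure with exponential backoff and logging."""
    for attempt in range(1, retries + 1):
        try:
            return fetch(url)
        except Exception as exc:
            log.warning("attempt %d/%d for %s failed: %s", attempt, retries, url, exc)
            if attempt == retries:
                raise  # exhausted retries: surface the last error to the caller
            time.sleep(backoff_s * 2 ** (attempt - 1))

# Demo with a flaky fake fetcher: fails twice, then succeeds on the third call.
calls = {"n": 0}
def flaky_fetch(url):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return "<html>ok</html>"

html = fetch_with_retries(flaky_fetch, "https://example.com/page")
print(calls["n"], html)  # → 3 <html>ok</html>
```

In a real scraper the injected `fetch` would wrap an HTTP client, and the parsed result would feed the validation and monitoring steps the posting describes.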
HiLabs Total Rewards Competitive Salary, Accelerated Incentive Policies, H1B sponsorship, Comprehensive benefits package that includes ESOPs, financial contribution for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs & a collaborative working environment, Smart mentorship, and highly qualified multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes. CCPA disclosure notice - https://www.hilabs.com/privacy

Posted 6 days ago

Apply

0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site

Linkedin logo

Diraaz is an upcoming tea brand (start-up) in the B2B and D2C marketspaces. We are looking for B2B sales specialists. We offer products such as traditional tea leaves, herbal teas, iced teas, etc.

Responsibilities
Understand the products and services our company offers in depth, in order to understand how we can add value for our customers and their businesses.
Identify potential clients, and our company's needs for revenue generation, brand building, etc. at a particular stage. (It is important to identify clients based on our needs at any time: big, small, famous or non-famous.)
Discuss with your reporting manager and develop a strategy to approach the selected parties: what will you offer, how will you offer it, and at what prices, so that the conversion ratio stays above 50%.
Schedule meetings with prospects, go through the sales cycle, meet them, follow up with them, influence them, meet them again, and win them over with your offers and your presentation, all with the intent to close deals.
Finalize payment terms at the time of closing the deal. Get the MoU signed. Track timely payments. Do it over and over. The company expects a 1:10 salary-to-revenue ratio. (Be wise and realistic when negotiating your salary.)
Log day-to-day work in software or other tools such as Excel or a CRM, according to company policy.

Qualifications
Experience and commitment to do all of the above and more in the manner asked by the company.
Effective operation of computers and software tools for day-to-day work logging and data management.
FMCG work experience.

Salary: Negotiable, with added performance-based incentives.

Posted 6 days ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description Amazon WWR&R is comprised of business, product, operational, program, software engineering and data teams that manage the life of a returned or damaged product from a customer to the warehouse and on to its next best use. Our work is broad and deep: we train machine learning models to automate routing and find signals to optimize re-use; we invent new channels to give products a second life; we develop world-class product support to help customers love what they buy; we pilot smarter product evaluations; we work from the customer backward to find ways to make the return experience remarkably delightful and easy; and we do it all while scrutinizing our business with laser focus. WWR&R data engineering team at Amazon Hyderabad Development Center is an agile team whose charter is to deliver the next generation of Reverse Logistics data lake platform. As a member of this team, your mission will be to support massively scalable, distributed data warehousing, querying, reporting and decision-support system. We support a fast-paced environment where each day brings new challenges and opportunities. As a Support Engineer, you will play a pivotal role in ensuring the stability, compliance, and operational excellence of our enterprise Data Warehouse (DW) environment. In this role, you will be responsible for monitoring and maintaining production data pipelines, proactively identifying and resolving issues that impact data quality, availability, or timeliness. You’ll collaborate closely with data engineers and cross-functional teams to troubleshoot incidents, implement scalable solutions, and enhance the overall resilience of our data infrastructure. A key aspect of this role involves supporting our data compliance and governance initiatives, ensuring systems align with internal policies and external regulatory standards such as GDPR. 
You will help enforce access controls, manage data retention policies, and support audit readiness through strong logging and monitoring practices. You’ll also lead efforts to automate manual support processes, improving team efficiency and reducing operational risk. Additionally, you will be responsible for maintaining clear, up-to-date documentation and runbooks for operational procedures and issue resolution, promoting consistency and knowledge sharing across the team. We’re looking for a self-motivated, quick-learning team player with a strong sense of ownership and a ‘can-do’ attitude, someone who thrives in a dynamic, high-impact environment and is eager to make meaningful contributions to our data operations.

Basic Qualifications
2+ years of software development, or 2+ years of technical support experience
Bachelor's degree in engineering or equivalent
Experience troubleshooting and debugging technical systems
Experience scripting in modern programming languages
Experience with SQL databases (querying and analyzing)

Preferred Qualifications
Experience with the AWS technology stack, including Redshift, RDS, S3, EMR, or similar solutions built around Hive/Spark
Experience with reporting tools like Tableau, OBIEE, or other BI packages
Knowledge of software engineering best practices across the development lifecycle is a plus

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company: ADCI HYD 13 SEZ
Job ID: A3005460
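As a rough illustration of the SQL-based monitoring this support role involves, the snippet below flags pipelines whose latest load is older than a cutoff, using sqlite3 as a stand-in for a real warehouse; the table, pipeline names, and timestamps are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pipeline_runs (pipeline TEXT, loaded_at TEXT)")
conn.executemany(
    "INSERT INTO pipeline_runs VALUES (?, ?)",
    [
        ("returns_daily", "2025-06-11 02:00"),
        ("returns_daily", "2025-06-12 02:00"),
        ("refunds_hourly", "2025-06-10 23:00"),  # stale: no run since the 10th
    ],
)
# Flag pipelines whose most recent load is older than the cutoff timestamp.
stale = conn.execute(
    """
    SELECT pipeline, MAX(loaded_at) AS last_load
    FROM pipeline_runs
    GROUP BY pipeline
    HAVING MAX(loaded_at) < '2025-06-12 00:00'
    ORDER BY pipeline
    """
).fetchall()
print(stale)  # → [('refunds_hourly', '2025-06-10 23:00')]
```

The same freshness query pattern translates directly to Redshift or any other SQL warehouse, typically scheduled and wired to an alerting channel.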

Posted 6 days ago

Apply

5.0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: Lead Backend Engineer Company: Lighthouz AI (YC S24) Location: Remote Employment Type: Full-time About Lighthouz AI Lighthouz AI is automating the back office of freight finance with freight-native AI agents. We help freight brokers, 3PLs, and factoring companies process invoices, rate confirmations, and PoDs in seconds—not hours—by replacing manual audits and brittle RPA with intelligent automation. Our platform handles real-world document chaos—scanned and handwritten paperwork, NOAs, BOLs, emails, and portal logins—executing complex workflows automatically. The result: faster payments, fewer disputes, and 10x operational throughput. We’re a Y Combinator S24 company founded by a team with deep experience across AI, supply chain, and enterprise systems (Google, Georgia Tech, Progressive, Halliburton). At Lighthouz, we’re not just streamlining freight finance—we’re rebuilding it from the ground up. About The Role We're looking for a Backend Engineer with familiarity with front end tech to join our fast-growing engineering team. You'll play a key role in building and scaling the core infrastructure powering our AI agents—from data ingestion and document processing to audit logic and integration pipelines. You’ll work closely with our founders to ship production-grade systems that support high-volume freight workflows and power real-time financial decision-making. 
What You’ll Do
👉🏼 Design and build scalable backend systems to support AI-driven invoice and document processing
👉🏼 Integrate with freight industry systems—TMSs, FMS, factoring portals, and ERPs
👉🏼 Develop APIs, data pipelines, and audit engines that drive automation
👉🏼 Collaborate with AI/ML teams to productionize model outputs
👉🏼 Implement observability, performance monitoring, and logging across services
👉🏼 Help shape engineering best practices in a fast-paced, early-stage environment
👉🏼 Make frontend updates as necessary

What We’re Looking For
👉🏼 5+ years of experience building backend systems in production
👉🏼 Strong programming skills in Python, Node.js, and Flask
👉🏼 Deep understanding of REST APIs, microservices, and event-driven architectures
👉🏼 Hands-on experience with PostgreSQL, MongoDB, or other SQL/NoSQL systems
👉🏼 Comfort working in cloud environments like AWS, using Docker/Kubernetes
👉🏼 Strong communication and collaboration skills in a remote-first team
👉🏼 Self-starter mindset: you take ownership and move fast

Nice to Have
👉🏼 Experience in freight, logistics, fintech, or document-heavy domains
👉🏼 Exposure to AI/ML workflows and integrating model outputs into backend systems
👉🏼 Familiarity with message queues (e.g. SQS, Kafka)
👉🏼 Background in building secure, enterprise-grade software
👉🏼 Familiarity with frontend JavaScript tech

What We Offer
💰 Competitive salary
🌎 Fully remote
🛠️ Chance to shape foundational systems at an early-stage YC startup
🚀 Work on real-world problems with massive impact across freight and finance

Join Us
If you’re a backend engineer excited to build AI-powered systems that actually do things, not just analyze things, come build with us. Freight finance is messy. Let’s rebuild together.

Posted 6 days ago

Apply

15.0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

The candidate should be an experienced Cloud/Infrastructure Architect who can operate independently and professionally in an agile environment, with a minimum of 15 years of experience in IT, preferably including 6+ years as a Cloud Architect bridging business and technical requirements to deliver high-value solutions.

Job Responsibilities & Skills
Responsible for cloud architecture design: conduct comprehensive analysis of current infrastructure, applications, and business requirements to determine the optimal cloud architecture that can scale seamlessly, ensuring reliability and continuous service delivery.
Evaluate the offerings of cloud service providers (CSPs) such as AWS, Azure, and GCP that best suit the solution needs, and integrate security into the design architecture, including identity and access management (IAM), encryption, network security, and compliance controls.
Develop migration strategies for applications, data, and infrastructure from an on-premises environment to the cloud, including optimizing workloads for cloud environments and leveraging cloud-native features and best practices to enhance performance, scalability, and cost-efficiency.
Strong understanding of security controls for the cloud.
Implement security controls (encryption, MFA, intrusion detection/prevention, network segmentation, IAM, endpoint security, etc.) to demonstrate compliance with relevant industry standards as applicable (GDPR, HIPAA, etc.); deploy security monitoring and logging tools to detect and respond to security incidents, ensuring rapid threat identification and mitigation; and perform regular security assessments and audits of the cloud infrastructure, applications, and configurations to identify vulnerabilities and areas for improvement.
Design disaster recovery (DR) and business continuity (BC) plans to mitigate the impact of outages, data loss, and other disruptions to business operations.
Design around a DevOps/Continuous Integration model, implement robust security measures, and practice DevOps and agile principles.
Good understanding of cloud (Azure, AWS) cost/billing propositions; collaborate with relevant stakeholders to optimize cloud spending and manage budgets effectively.
Strong understanding of cloud networking.
Experience leading pre-sales/sales support as a Cloud Architect.
Facilitate discussions, workshops, and planning sessions to make informed decisions about cloud adoption, architecture design, and technology selection.
Creation of SOW/proposal/architecture design documents based on customer requirements.
Document architectural designs, configurations, and implementation guidelines for reference and knowledge sharing.
Provide mentorship and guidance to junior team members on cloud architecture principles, best practices, and technologies.
Contribute to developing and maintaining cloud governance frameworks, policies, and standards.

Qualifications: 15+ years of experience in IT with a Master's or Bachelor's degree in engineering or a related field
Strong client-facing cloud consulting experience
Strong understanding of one of the public cloud platforms (Azure/AWS/GCP)
Strong leadership and communication skills
An Azure and/or AWS Architect or equivalent certification will be an added advantage
(ref:hirist.tech)

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About The Role
The Snapmint DevOps team is looking for a DevOps Engineer with a passion for working on cutting-edge technology who thrives on the challenge of building something new that will operate at massive scale. In this role, you will be responsible for building and maintaining DevOps-related technologies. You will work alongside software engineers. Your work will have a high impact on making online retail purchases more affordable to 1B Indian consumers. You will completely build and own one of the areas of DevOps: CI/CD; scaling microservices and distributed applications using containers and Kubernetes or related technologies; DB clusters; the Data Lake platform; centralized logging and monitoring; and security.

Job Description
We are looking for a talented DevOps Engineer who will contribute to the success of Snapmint by partnering with Engineering, Operations, Stakeholders, and Management. We are looking for a person who is committed to teamwork, who enjoys working on cutting-edge technology in a fast-paced environment, is customer-centric, and thrives on the challenge of building something new that will operate at a nation-wide scale.

Basic Qualifications
Bachelor's or Master's degree in computer science or equivalent.
Programming experience with at least one scripting language such as Python, JavaScript, or Shell.
2-6 years of experience as a DevOps Engineer.

Role & Responsibilities
Proven success in communicating with users, other technical teams, and senior management to collect requirements and describe technical decisions and technical strategy.
Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, source control management, continuous deployments, testing, information security, and operations.
Strong emphasis on building solutions to automate development and operational systems.
Experience providing technical leadership and mentoring/training the engineering community on best practices and complex technical issues. Experience with Infrastructure as Code using CloudFormation, Terraform, or other tools. Experience with cloud-native CI/CD workflows and tools such as Jenkins, Bamboo, TeamCity, AWS CodeDeploy, and/or GitLab. Experience with microservices and distributed applications using containers and Kubernetes. Experience with the full software development life cycle and delivery using Agile practices. Experience working with AWS technologies. Good analytical and problem-solving skills.

Location: Gurugram.

Posted 6 days ago

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Note: Only hands-on developer profiles will be considered; designer profiles will not.

Must-Have Skills:

4+ years of experience in Tibco BW. Design and develop middleware solutions using Tibco BusinessWorks Container Edition (BWCE) and BW 6.x tools. Develop and maintain Tibco BWCE/BW 6.x applications, ensuring high performance and reliability. Integrate Tibco BWCE/BW 6.x with various databases and external systems. Create and manage Tibco BWCE/BW 6.x processes, including error handling and logging. Conduct unit testing and support integration testing efforts. Troubleshoot and resolve issues related to Tibco BWCE and BW 6.x applications. Provide technical support and guidance to team members. Awareness of deployment in containerized environments and basic knowledge of Kubernetes commands. Familiarity with Tibco ActiveSpaces, Tibco FTL, and Tibco Mashery. Experience with monitoring tools such as Splunk and Dynatrace.

Posted 6 days ago

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary

We are seeking a Principal Software Engineer/Consultant with expertise in diagnosing and resolving complex issues across frontend, backend, databases, and infrastructure. This role requires a hands-on problem solver who can identify root causes, optimize system performance, and enhance application scalability while ensuring a seamless user experience and system reliability.

Issue Identification & Resolution: Troubleshoot and resolve critical application issues across the frontend (React/Angular), backend (Java), and databases (SQL/NoSQL). Perform root cause analysis on performance bottlenecks, API failures, memory leaks, database locks, and network latencies. Act as the final escalation point for critical production incidents, ensuring timely resolution. KPIs: incident resolution time, system downtime reduction, recurrence of critical issues.

System Architecture & Performance Optimization: Optimize queries, API performance, and caching to improve efficiency. Implement monitoring, logging, and automation tools to detect and prevent issues proactively. Provide solutions, and design and enhance scalable, high-performance system architectures with a focus on resiliency and fault tolerance. KPIs: query execution time, API response time, load-handling efficiency, system uptime.

Deep Investigation & Debugging: Perform code reviews, trace logs, and profile system behavior to detect inefficiencies. Identify and resolve race conditions, memory leaks, and deadlocks. Automate deployments and enforce coding standards. Work closely with engineering, DevOps, QA, and customer teams to ensure application stability. Lead code reviews and best-practices adoption to maintain high engineering standards. Drive initiatives to improve system reliability, reduce downtime, and enhance the user experience.
KPIs: debugging turnaround time, performance improvement %, reduction in production incidents.

Customer & Business-Focused Engineering: Work closely with customers and internal teams to ensure business-critical issues are resolved quickly. Translate technical findings into clear, actionable recommendations. Implement resilience mechanisms and failover strategies to prevent major outages. KPIs: customer satisfaction (CSAT/NPS), reduction in issue recurrence.

Candidate Profile: 10+ years of experience in full-stack development and problem-solving. Expertise in debugging, profiling, and optimizing applications at scale. Strong knowledge of system architecture, microservices, databases, and cloud infrastructure (AWS/Azure/GCP). Experience with monitoring tools, log analysis, and performance-tuning techniques. Ability to communicate technical findings clearly to engineers and business stakeholders.

Why Join Monocept? Work on high-impact, mission-critical projects with top-tier customers. Solve complex, real-world engineering challenges at scale. Be part of a team that values technical excellence, collaboration, and continuous learning.

Posted 6 days ago

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role Overview

We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications.

Key Responsibilities

Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP. Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools. Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, Bitbucket Pipelines, or AWS CodePipeline. Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker. Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure. Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST. Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk. Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools. Troubleshoot complex networking, Linux/Windows server issues, and cloud-related performance bottlenecks. Collaborate with development, security, and operations teams to streamline the DevSecOps process.

Must-Have Skills

3+ years of experience in DevOps, cloud infrastructure, or platform engineering. Expertise in at least one major cloud provider: AWS, Azure, or GCP. Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies. Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools.
Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation. Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines. Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration. Expertise in networking (VPCs, subnets, load balancing, security groups, firewalls). Experience with log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, and Dynatrace. Strong communication skills to work with cross-functional teams and external customers. Knowledge of cloud security best practices, including IAM, WAF, GuardDuty, CVE scanning, and vulnerability management.

Good-to-Have Skills

Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center). Experience in compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST). Exposure to Windows Server administration alongside Linux environments. Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch). GitOps experience with tools like ArgoCD or Flux. Background in penetration testing, intrusion detection, and vulnerability scanning. Experience in cost optimization strategies for cloud infrastructure. Passion for mentoring teams and sharing DevOps best practices.

Posted 6 days ago

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Design, implement, and maintain scalable and highly available systems and infrastructure. Monitor, troubleshoot, and resolve incidents to ensure the optimal performance and availability of the applications. The candidate must have experience with multiple data centers, AWS migration strategy, and cloud-native architectures. Develop and maintain documentation related to system architecture, configuration, and processes. Participate in on-call rotations to provide support for production systems and handle critical incidents. Continuously evaluate and improve systems, processes, and tools to enhance reliability and efficiency.

Requirements: Strong troubleshooting and problem-solving skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced and dynamic environment. Proficiency in programming languages such as Python, Java, or Go. Knowledge of database technologies (SQL, NoSQL). Strong hands-on experience with Kafka, Kong, and NGINX. Familiarity with monitoring and logging tools such as Prometheus, Splunk, the ELK stack, or similar. Strong knowledge of Linux/Unix systems fundamentals. Experience with CI/CD pipelines and related tools (Jenkins, GitLab CI/CD). Experience with cloud platforms such as AWS, Azure, or Google Cloud, and relevant certifications. Hands-on experience with containerization technologies (Docker, Kubernetes). Knowledge of infrastructure-as-code tools such as Terraform or CloudFormation. Proficiency in HTML5, CSS3, and JavaScript (ES6+). Strong understanding of modern frontend frameworks/libraries such as React, Angular, or Vue.js. Knowledge of Node.js for server-side JavaScript development. Experience with responsive design principles and mobile-first development. Familiarity with state management libraries/tools (Redux, Vuex, etc.). Knowledge of RESTful APIs and asynchronous programming. Experience with version control systems, preferably Git. Ability to collaborate effectively in a team environment. Strong problem-solving and debugging skills.
Familiarity with UI/UX principles and design tools (Sketch, Figma, etc.). Understanding of browser compatibility issues and performance optimization techniques. Experience with testing frameworks such as Jest, Enzyme, or Cypress. Knowledge of CI/CD pipelines and automated testing. Understanding of web security principles and best practices.

Posted 6 days ago

3.0 years

0 Lacs

Gurugram, Haryana, India

Remote

About Markovate

At Markovate, we don't just follow trends; we drive them. We transform businesses through innovative AI and digital solutions that turn vision into reality. Our team harnesses breakthrough technologies to craft bespoke strategies that align seamlessly with our clients' ambitions. From AI consulting and Gen AI development to pioneering AI agents and agentic AI, we empower our partners to lead their industries with forward-thinking precision and unmatched expertise. This is a great opportunity to collaborate with top AI engineers in a fast-paced environment and gain hands-on experience across multiple AI/ML projects with real-world impact.

Overview

We are seeking a DevOps Engineer with experience in deployment and cloud infrastructure who can take ownership of the entire pipeline process. This role involves managing CI/CD, cloud infrastructure, automation, and monitoring.

Responsibilities: End-to-end pipeline management for DevOps. Automate CI/CD workflows (Jenkins, GitLab CI/CD, GitHub Actions). Manage cloud infrastructure (AWS, Azure, GCP) using Terraform, Ansible, and CloudFormation. Deploy and monitor Docker & Kubernetes environments. Set up monitoring & logging (Prometheus, Grafana, ELK, Datadog). Troubleshoot VMs with excellent Linux/Ubuntu expertise. Implement security best practices and ensure system reliability.

Requirements: Minimum 3 years in DevOps and cloud. Strong knowledge of Linux and Ubuntu. Python skills for automation & scripting. Hands-on experience with AWS, Azure, or GCP. Expertise in IaC (Terraform, Ansible), CI/CD, Docker, and Kubernetes. Experience in monitoring, logging, and security.

Please note: This is a remote job. However, selected candidates will be expected to work from our Gurgaon office a few times a month, if requested.

Posted 6 days ago

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Roles And Responsibilities

Build, test, and administer highly available container application platform clusters (e.g., Red Hat OpenShift, Kubernetes, Docker Datacenter). Champion security by injecting it into the existing development workflow and every stage of software development, ensuring the entire infrastructure is secure. Identify normal routines and repeatable tasks that are candidates for automation, then create and support the deployment of automation using Ansible. Work within complex software systems to isolate defects, reproduce defects, assess risk, and understand varied customer deployments. Assist application teams with onboarding to container application platforms in areas such as resource requirements, capacity analysis, and troubleshooting support. Handle Azure provisioning, configuration management, storage management, network management, and virtualization. Create the Continuous Integration (CI) and Continuous Deployment (CD) automation infrastructure to support the project engineering team. Develop and improve standards for security (via security as code) across a continuous delivery environment and cloud-based production deployments.

Your Skills & Experience

Must have: Hands-on experience with Terraform; ability to write reusable Terraform modules. Must have: Hands-on Python and Unix shell scripting. Must have: Strong understanding of CI/CD pipelines in a globally distributed environment using Git, Artifactory, Jenkins, and a Docker registry. Must have: Experience with GCP services and writing Cloud Functions. Must have: Experience with GCP IAM. Must have: Knowledge of common GCP services: Logging, Log Sinks, Pub/Sub, Docker, GCS, etc. Nice to have: GCP Associate or Professional certification. Nice to have: Hands-on experience with OPA policies.
Must have: Hands-on knowledge of Helm charts. Must have: Hands-on experience deploying and managing Kubernetes infrastructure with Terraform Enterprise; ability to write reusable Terraform modules. Must have: Experience using Docker within container orchestration platforms such as GKE. Must have: Knowledge of setting up Splunk. Must have: Knowledge of Spark on GKE. Must have: Ability to establish connectivity and scale elastically. Nice to have: Certified Kubernetes Administrator (CKA) and/or Certified Kubernetes Application Developer (CKAD) certification.

Posted 6 days ago

Exploring Logging Jobs in India

The logging job market in India is vibrant and offers a wide range of opportunities for job seekers interested in this field. Logging professionals are in demand across various industries such as IT, construction, forestry, and environmental management. If you are considering a career in logging, this article will provide you with valuable insights into the job market, salary range, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Chennai

These cities are known for their thriving industries where logging professionals are actively recruited.

Average Salary Range

The average salary range for logging professionals in India varies based on experience and expertise. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.

Career Path

A typical career path in logging may include roles such as Logging Engineer, Logging Supervisor, Logging Manager, and Logging Director. Professionals may progress from entry-level positions to more senior roles such as Lead Logging Engineer or Logging Consultant.

Related Skills

In addition to logging expertise, employers often look for professionals with skills such as data analysis, problem-solving, project management, and communication skills. Knowledge of industry-specific software and tools may also be beneficial.

Interview Questions

  • What is logging and why is it important in software development? (basic)
  • Can you explain the difference between logging levels such as INFO, DEBUG, and ERROR? (medium)
  • How do you handle log rotation in a large-scale application? (advanced)
  • Have you worked with any logging frameworks like Log4j or Logback? (basic)
  • Describe a challenging logging issue you faced in a previous project and how you resolved it. (medium)
  • How do you ensure that log files are secure and comply with data protection regulations? (advanced)
  • What are the benefits of structured logging over traditional logging methods? (medium)
  • How would you optimize logging performance in a high-traffic application? (advanced)
  • Can you explain the concept of log correlation and how it is useful in troubleshooting? (medium)
  • Have you used any monitoring tools for real-time log analysis? (basic)
  • How do you handle log aggregation from distributed systems? (advanced)
  • What are the common pitfalls to avoid when implementing logging in a microservices architecture? (medium)
  • How do you troubleshoot a situation where logs are not being generated as expected? (medium)
  • Have you worked with log parsing tools to extract meaningful insights from log data? (medium)
  • How do you handle sensitive information in log files, such as passwords or personal data? (advanced)
  • What is the role of logging in compliance with industry standards such as GDPR or HIPAA? (medium)
  • Can you explain the concept of log enrichment and how it improves log analysis? (medium)
  • How do you handle logging in a multi-threaded application to ensure thread safety? (advanced)
  • Have you implemented any custom log formats or log patterns in your projects? (medium)
  • How do you perform log monitoring and alerting to detect anomalies or errors in real-time? (medium)
  • What are the best practices for logging in cloud-based environments like AWS or Azure? (medium)
  • How do you integrate logging with other monitoring and alerting tools in a DevOps environment? (medium)
  • Can you discuss the role of logging in performance tuning and optimization of applications? (medium)
  • What are the key metrics and KPIs you track through log analysis to improve system performance? (medium)
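Several of the questions above (log levels, structured logging, handling of sensitive fields) can be warmed up with a small sketch. The snippet below uses only Python's standard library; the `log_event` helper, its field names, and the redaction list are illustrative examples for interview preparation, not part of any specific framework:

```python
import json
import logging

# Standard-library logging: each record carries a severity level, and the
# logger's threshold decides what is emitted (DEBUG < INFO < WARNING < ERROR < CRITICAL).
logger = logging.getLogger("interview_demo")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.debug("suppressed: below the INFO threshold")
logger.info("service started")

# Illustrative structured-logging helper: emit one JSON object per event so
# aggregators (ELK, Splunk, CloudWatch) can filter on fields instead of
# regex-parsing free text. Sensitive keys are redacted before serialization.
SENSITIVE_KEYS = {"password", "token"}  # hypothetical redaction list

def log_event(level: int, event: str, **fields) -> str:
    safe = {k: ("[REDACTED]" if k in SENSITIVE_KEYS else v) for k, v in fields.items()}
    record = {"level": logging.getLevelName(level), "event": event, **safe}
    line = json.dumps(record, sort_keys=True)
    logger.log(level, line)
    return line

line = log_event(logging.INFO, "order_placed", order_id=42, password="s3cret")
assert json.loads(line)["password"] == "[REDACTED]"
```

The same pattern generalizes to the aggregation questions: once every event is a flat JSON object with a consistent `event` field, correlation across distributed services reduces to filtering on shared field values.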

Closing Remark

As you embark on your journey to explore logging jobs in India, remember to prepare thoroughly for interviews by honing your technical skills and understanding industry best practices. With the right preparation and confidence, you can land a rewarding career in logging that aligns with your professional goals. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies