Home
Jobs

4124 Logging Jobs - Page 40

Filter
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 7.0 years

0 Lacs

Andhra Pradesh

On-site

Support ongoing operations tasks and incidents, ensuring platform availability to end users and applications. Additional responsibilities include platform optimization, logging & auditing enhancements, and implementing new solutions and automation within the IAM space.

Key Responsibilities
- Administer, implement, and maintain IAM components and interfaces in the Identity and Access Management framework. The IAM platform consists of the following products: Broadcom/CA SiteMinder, Broadcom/CA Directory, Broadcom/CA Identity Manager, Broadcom/CA Advanced Authentication, Broadcom/CA API Gateway, Active Directory, and Azure AD.
- Provide technical solutions to production support issues.
- Identify, analyze, and resolve issues in the Shared Service security servers across production and development environments.
- Good hands-on experience with security system integration for Identity & Access Management solutions.
- Review IT artifacts and guide the team in accordance with industry best standards.
- Adhere to timelines and guide the team in documenting and implementing changes.
- Lead through thought leadership; provide strategic direction for production support processes, business strategy, and growth initiatives.
- Handle production and operational support activities: day-to-day user requests, application alert monitoring, health checks, and incident management; prepare and maintain operational documentation.
- Full accountability for 24x7 support of Security Shared Services applications, meeting SLAs.
- Willingness to work rotational shifts, including night shifts.
- Good problem-solving and analytical skills.
- Ability to work under pressure and coordinate with the offshore team, providing guidance on incident management, work order management, change management, problem ticket management, stakeholder communications, rollback plans, etc.
- Coordination with enterprise infrastructure teams.
- Provide analytical and technical guidance to the team; recommend and/or take action to direct the analysis and solutions required.
- Develop a plan to automate operational activities.
- Work closely with the other project team members to deliver the working solution and test process for the access management solution.
- Good communication skills.
- Participation in disaster recovery, identity governance, and certificate renewal activities for the organization.
- Ready to work in an onsite/offshore model and in shifts.
- Scripting knowledge: Unix/Perl.
- Knowledge of monitoring tools: Dynatrace (Synthetic, OneAgent), SiteScope/ScienceLogic, CA APM (Wily), Sumo Logic/Logstash, TLeaf/Glassbox, etc.

Required Qualifications
Degree in computer science, engineering, IT, or an equivalent technical degree. In-depth knowledge with at least 4-7 years of operational/support experience in Identity & Access Management.

Preferred Qualifications
Knowledge of monitoring tools: Dynatrace (Synthetic, OneAgent), SiteScope/ScienceLogic, CA APM (Wily), Sumo Logic/Logstash, TLeaf/Glassbox, etc. Experience with CI/CD pipelines and tools, including Bitbucket, Jenkins, Ansible, Jira, and Confluence. Good exposure to containers, including Kubernetes and Docker, and to AWS and Azure. Good exposure to application architecture and app/web servers (IIS/Tomcat). Experience with Azure AD and Azure MFA for identity lifecycle.

About Our Company
Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence.
The firm’s focus areas include Asset Management and Advice, Retirement Planning, and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions, and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you’re talented, driven, and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status, or any other basis prohibited by law.

Full-Time/Part-Time: Full time
Timings: 2:00p-10:30p (India)
Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Technology

Posted 6 days ago

Apply

5.0 years

0 Lacs

Andhra Pradesh

On-site

We are looking for a highly motivated and experienced individual with fresh ideas and a passion and great acumen for quality engineering. Working experience in Java and experience with test automation, including Selenium WebDriver and web services, is a must. Good working knowledge of testing processes and the quality assurance life cycle in Waterfall and Agile environments is a must. You must have good knowledge of database systems, be able to write basic and complex queries, and have an understanding of advanced T-SQL concepts. You should be familiar with both Windows and UNIX environments.

Key Responsibilities
Troubleshooting & Incident Management
- Write automation scripts for UI applications, web services, and ETL; find opportunities for test automation in various applications.
- Proactively maintain existing automation test scripts.
- Upgrade the regression test suite after every release.
- Collaborate with team members and contractors/vendors on bridge calls for root cause analysis, then retest critical defects in an expeditious manner.
- Monitor and report issues and work with the required team/vendor for quick resolution; send test status reports to leadership.
- Be open to learning new technologies/tools.

Analysis
- Recommend, create, and implement automation test scripts as and where required in existing or new applications.
- Document solutions for problems/defects based on comprehensive and thoughtful analysis of business goals, objectives, requirements, and existing technologies.

Processes, Standards & Best Practices
- Defect management: logging, prioritizing, and retesting defects.
- Test management: write manual and automation test scripts in coordination with BSA and development teams.

Required Qualifications
- Bachelor’s degree in Computer Science or a similar field, or equivalent work experience; 5-8 years of relevant experience.
- 5+ years of strong experience in automation using Selenium WebDriver and REST Assured; 5+ years of experience in Java programming.
- Strong hands-on experience with TestNG, data- and keyword-driven frameworks, and the Page Object Model.
- Strong knowledge of programming fundamentals and algorithms.
- UNIX scripting; working knowledge of Git.
- Strong database and SQL experience.
- Strong analytical, problem-solving, and debugging skills.
- Effective written and verbal communication skills.

Preferred Qualifications
Knowledge of any other scripting language (Python/VBScript/shell script) is a plus. Knowledge of tools like Playwright, Cypress, TestCafe, Puppeteer, etc. is an advantage. Mainframe knowledge is good to have.

About Our Company
Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning, and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions, and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you’re talented, driven, and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status, or any other basis prohibited by law.

Full-Time/Part-Time: Full time
Timings: 2:00p-10:30p (India)
Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Technology

Posted 6 days ago

Apply

0 years

0 Lacs

India

Remote


About Us
At SentinelOne, we’re redefining cybersecurity by pushing the limits of what’s possible, leveraging AI-powered, data-driven innovation to stay ahead of tomorrow’s threats. From building industry-leading products to cultivating an exceptional company culture, our core values guide everything we do. We’re looking for passionate individuals who thrive in collaborative environments and are eager to drive impact. If you’re excited about solving complex challenges in bold, innovative ways, we’d love to connect with you.

What are we looking for?
The SRE organization’s mission at SentinelOne (S1) is to keep our uptime promise to our customers by ensuring we meet our SLOs/SLAs, helping our engineering teams ship software to our customers quickly and with quality, and ensuring our customers are successful. As a Senior SRE, you will join the Core SRE team at S1 and have an amazing opportunity to drive outcomes that improve the reliability, stability, and cost efficiency of S1’s Singularity Platform, our largest customer-facing service, with over 12,000 B2B/B2G customers deployed across more than 6 regions and 2 cloud service providers. Upcoming projects you could work on include the Monitoring and Observability Uplift, Logging Pipeline modernization, toil automation, and more!

What will you do?
We are looking to add a Senior SRE with extensive prior operations experience for a SaaS product who can drive deployment re-architecture with a focus on self-service and automation: someone who has delivered SaaS products in multi-cloud, on-prem, and air-gapped environments, driven continuous delivery of software, run incident post-mortems, provided feedback on engineering architecture decisions, and automated repetitive operational tasks. You will join a like-minded team of SREs who help run our operations smoothly at scale by building a platform on which S1’s services can run. If the thought of running a large-scale cybersecurity platform on various cloud providers and air-gapped environments excites you, you’ve found the right place! As a team we value good written communication skills, data-driven decisions, and a keen eye for continuous improvement. You’ll help simplify, have a passion for new ideas, and know how to execute iteratively towards the final goal. We value candor and collaboration.

What skills and knowledge should you bring?
- Several years of experience running site reliability for SaaS products, running operations at large scale, and proven experience leading the design and architecture of infrastructure (cloud and on-prem combined)
- Multi-cloud experience, with deep expertise in at least one of the AWS/GCP/Azure platforms
- Production experience with orchestration systems like Kubernetes, Nomad, or Mesos (we are a Kubernetes shop); experience with Rancher, Platform9, or other managed k8s providers is desired
- Familiarity with air-gapped deployments on top of k8s
- Familiarity with Kafka and Redis
- Familiarity with IaC and tools (Terraform or Pulumi)
- Familiarity with CI and practical delivery using any of the major tools; familiarity with deployment strategies like blue-green, rolling, and canary deploys and best practices around deployment automation (with tools like Shipit or Spinnaker) is desired
- Demonstrated proficiency in at least one mainstream language (Python/Go/Ruby/etc.)
- Familiarity with SecOps & compliance processes and their touch points with SRE is desired
- Polyglot experience with other SRE tools; we integrate with more tools every day
- Keeping a pulse on the latest SRE trends and open source
- Prior product-building experience

Apart from the above technical skills, the following soft skills are required:
- Curiosity, fast learning, pursuit of improvement, and great communication
- Ability to work in a diverse and distributed team
- A self-starter who is passionate and motivated by new technologies and has empathy for legacy systems
- A quick learner who can navigate unfamiliar programming languages, systems, and processes

Why Us?
You will be joining a cutting-edge company, where you will tackle extraordinary challenges and work with the very best in the industry, along with competitive compensation.
- Flexible working hours and hybrid/remote work model
- Flexible Time Off and Flexible Paid Sick Days
- Global gender-neutral Parental Leave (16 weeks, beyond the leave provided by local laws)
- Generous employee stock plan in the form of RSUs (restricted stock units)
- On top of RSUs, you can benefit from our attractive ESPP (employee stock purchase plan)
- Gym membership via Cultfit
- Wellness Coach app, with 3,000+ on-demand sessions, daily interactive classes, audiobooks, and unlimited private coaching
- Private medical insurance plan for you and your family
- Life insurance covered by S1 (for employees)
- Telemedical app consultation (Practo)
- Global Employee Assistance Program (confidential counseling for both personal and work-life matters)
- High-end MacBook or Windows laptop
- Home-office setup allowance (one-time) and maintenance allowance
- Internet allowance
- Provident Fund and Gratuity (as per government norms)
- NPS contribution (employee contribution)
- Half-yearly bonus program depending on individual and company performance
- Above-standard referral bonus as per policy
- Udemy Business platform for hard/soft skills training, and support for your further educational activities
- Sodexo food coupons

SentinelOne is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. SentinelOne participates in the E-Verify Program for all U.S.-based roles.

Posted 6 days ago

Apply

0 years

0 Lacs

India

Remote


Who we are
At Millipixels, we design impactful experiences for clients worldwide across various domains and emerging technologies. With our primary design and development center and offices in the Chandigarh Tricity area in India, combined with FlexCampus spaces in the UK, Singapore, and the United States, we are on course to be a globally relevant, distributed, full-service software solutions organization. We offer our employees the option to work remotely and choose the location that best meets their needs. Through our cloud-native infrastructure and processes, we have created, sustained, and built distributed, remote teams untethered to a physical location. We ensure that our people don't have to compromise between their home and work lives, enabling them to stay connected to their families while building fulfilling careers.

The Role
We are seeking a highly skilled and experienced Senior Backend Developer (NodeJS) to join our dynamic team and contribute to our exciting content pipeline project. As a Senior Backend Developer, you will play a pivotal role in developing cutting-edge applications using event-driven architecture and enhancing the functionality of our existing systems.

Job Responsibilities
- Design, develop, and maintain NodeJS/TypeScript/JavaScript applications.
- Develop and maintain API- and microservices-based applications.
- Collaborate closely with the Content Team to deliver the content pipeline project.
- Develop new applications utilizing event-driven architecture, ensuring efficiency and scalability.
- Enhance the functionality of existing applications to meet evolving business needs.
- Actively participate in technical discussions, providing valuable insights and contributing to decision-making processes.
- Conduct code reviews and maintain high code quality standards throughout the development lifecycle.
- Implement robust logging and monitoring solutions to ensure application performance and stability.
- Write comprehensive unit tests to validate the functionality and reliability of the codebase.
- Communicate and collaborate effectively with cross-functional teams, fostering a culture of teamwork and innovation.

Requirements
- Expert-level proficiency in JavaScript and TypeScript, with a proven track record of building server-side applications using Node.js.
- Extensive knowledge of AWS technologies, including S3, ECS, and CloudWatch, demonstrating the ability to architect and deploy scalable solutions on the AWS platform.
- Experience working with CI/CD pipelines, automating the build, test, and deployment processes.
- Familiarity with Kafka or any other message broker or queuing system would be a valuable asset.
- Previous experience building event-driven microservices will be advantageous.
- Strong working experience building and consuming REST APIs.

Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- At least five years of experience developing complex software applications using Node.js, TypeScript, and JavaScript.
- Proven experience developing API- and microservices-based applications.
- Strong proficiency in TypeScript, JavaScript, and related libraries and frameworks like ExpressJS and NestJS.
- Experience with databases such as MongoDB, PostgreSQL, and MySQL.
- Understanding of agile development methodologies.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- In-depth expertise with cloud platforms, especially AWS (preferred) and Azure.
- In-depth expertise in containerization and container orchestration with Docker and Kubernetes.
- Understanding of CI/CD and common DevOps models.
- Proficiency in ReactJS, in addition to the above, is highly preferable.
- Excellent communication and collaboration skills are an absolute requirement; you must be comfortable conversing in English directly with your counterparts in other countries.

Benefits of working at Millipixels
- Choose your working times; focus on delivering targets, not on time spent.
- Medical health insurance: company-paid health insurance for ₹500,000, with the option to extend to a spouse and/or other immediate dependents at cost.
- Regular financial, tax-saving, and healthcare advice sessions from experts.
- Generous paid vacation over the course of the year.

Posted 6 days ago

Apply

10.0 years

0 Lacs

India

Remote


About CoinCROWD
At CoinCROWD, we're reimagining how crypto is spent in everyday life. Our flagship product, CROWD Wallet, is a secure, intuitive wallet designed to make digital currencies accessible, fast, and real-world ready. As an early-stage startup, we value speed, simplicity, and smart engineering, especially when it comes to infrastructure. We're looking for someone who’s not only great at DevOps but thinks like a builder, acts like an owner, and thrives in lean, fast-paced environments.

What You’ll Own
As the DevOps Manager, you’ll be responsible for designing and maintaining scalable, cost-efficient, and secure infrastructure on Google Cloud Platform (GCP), our cloud of choice. You’ll also drive DevOps best practices across the team and set up systems that support rapid product iterations without compromising on reliability or cost. You will:
- Own GCP infrastructure: Design, deploy, and manage production infrastructure using Terraform (IaC) and GKE.
- Set up and maintain CI/CD: Automate build and deployment pipelines using Cloud Build, GitHub Actions, and Cloud Deploy.
- Handle multiple deploy requests: Support rapid deployment cycles while maintaining stability and performance.
- Lead incident response and monitoring: Implement logging, alerting, and monitoring using the Google Cloud Operations Suite (Stackdriver).
- Write smart automation: Build scripts in Python and Bash to automate repetitive tasks and improve DevOps workflows.
- Create and enforce process: Standardize deployment practices, access management, and cloud operations with a long-term mindset.
- Watch the budget: Optimize infrastructure for cost-efficiency and work closely with leadership to monitor GCP usage and budget.
- Mentor and lead: Manage and coach a small team of engineers, and help scale our DevOps capabilities as we grow.

Who You Are
- 10+ years of experience in DevOps, Cloud Engineering, or Site Reliability Engineering.
- Minimum 5 years of hands-on experience with GCP (non-negotiable).
- Deep expertise in:
  - GCP products: GKE, Compute Engine, Cloud SQL, VPC, IAM, Cloud Armor, Cloud Storage.
  - Infrastructure as Code: Terraform (preferred), Pulumi, or similar.
  - CI/CD tools: Cloud Build, Cloud Deploy, GitHub Actions.
  - Monitoring & alerting: Stackdriver (Google Cloud Monitoring & Logging).
  - Scripting: Python (preferred), Bash.
- Experience working in early-stage environments with limited resources and tight deadlines.
- Proven ability to balance technical decisions with cost-conscious thinking.
- Strong communication skills and a collaborative, solution-oriented mindset.
- Process-oriented, reliable, and excited to help define and refine our DevOps practices.

Why Join Us?
You’ll be the first DevOps leader in the company and will shape how we build and ship products. Work closely with the founding team and have a seat at the table for key decisions. Remote-first team, flexible hours, outcome > hours culture. Opportunity to make a direct impact in a high-growth, real-world crypto product. We're scrappy, honest, and ambitious, and we want people who share that energy. Sound like your kind of challenge? Let's build something meaningful, scalable, and real, together.

Posted 6 days ago

Apply

10.0 years

0 Lacs

India

Remote


Job Title: Senior Backend Engineer – Python & Microservices
Location: Remote
Experience Required: 8–10+ years

🚀 About the Role:
We’re looking for a Senior Backend Engineer (Python & Microservices) to join a high-impact engineering team focused on building scalable internal tools and enterprise SaaS platforms. You'll play a key role in designing cloud-native services, leading microservices architecture, and collaborating closely with cross-functional teams in a fully remote environment.

🔧 Responsibilities:
- Design and build scalable microservices using Python (Flask, FastAPI, Django)
- Develop production-grade RESTful APIs and background job systems
- Architect modular systems and drive microservice decomposition
- Manage SQL & NoSQL data models (PostgreSQL, MongoDB, DynamoDB, ClickHouse)
- Implement distributed data pipelines using Kafka, RabbitMQ, and SQS
- Apply best practices in rate limiting, security, performance optimisation, logging, and observability (Grafana, Datadog, CloudWatch)
- Deploy services in cloud environments (AWS preferred, Azure/GCP acceptable) using Docker, Kubernetes, and EKS
- Contribute to CI/CD and Infrastructure as Code (Jenkins, Terraform, GitHub Actions)

✅ Requirements:
- 8–10+ years of hands-on backend development experience
- Strong proficiency in Python (Flask, FastAPI, Django, etc.)
- Solid experience with microservices and containerised environments (Docker, Kubernetes, EKS)
- Expertise in REST API design, rate limiting, and performance tuning
- Familiarity with SQL & NoSQL (PostgreSQL, MongoDB, DynamoDB, ClickHouse)
- Experience with cloud platforms (AWS preferred; Azure/GCP also considered)
- CI/CD and IaC knowledge (GitHub Actions, Jenkins, Terraform)
- Exposure to distributed systems and event-based architectures (Kafka, SQS)
- Excellent written and verbal communication skills

🎯 Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science or a related field
- Certifications in Cloud Architecture or System Design
- Experience integrating with tools like Zendesk, Openfire, or similar chat/ticketing platforms

Posted 6 days ago

Apply

6.0 years

0 Lacs

Kanayannur, Kerala, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.

Your Key Responsibilities
- Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
- ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources.
- Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
- Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity.
- Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot pipeline failures and address data latency or quality issues.
- Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
- DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
- Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
- Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
- Strong hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
- Solid understanding of ETL/ELT design and implementation principles
- Strong SQL and PySpark skills for data transformation and validation
- Exposure to Python for automation and scripting
- Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
- Experience working with Power BI or Tableau for data visualization and reporting support
- Strong problem-solving skills, attention to detail, and a commitment to data quality
- Excellent communication and documentation skills to interface with technical and business teams
- Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing

To qualify for the role, you must have
- 4–6 years of experience in DataOps or Data Engineering roles
- Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
- Experience working with Informatica CDI or similar data integration tools
- Scripting and automation experience in Python/PySpark
- The ability to support data pipelines in a rotational on-call or production support environment
- Comfort working in a remote/hybrid and cross-functional team setup

Technologies and Tools
Must haves
- Azure Databricks: experience in data transformation and processing using notebooks and Spark.
- Azure Data Lake: experience working with hierarchical data storage in Data Lake.
- Azure Synapse: familiarity with distributed data querying and data warehousing.
- Azure Data Factory: hands-on experience in orchestrating and monitoring data pipelines.
- ETL process understanding: knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.

Good to have
- Power BI or Tableau for reporting support
- Monitoring/logging using Azure Monitor or Log Analytics
- Azure DevOps and Git for CI/CD and version control
- Python and/or PySpark for scripting and data handling
- Informatica Cloud Data Integration (CDI) or similar ETL tools
- Shell scripting or command-line data tooling
- SQL (across distributed and relational databases)

What We Look For
- Enthusiastic learners with a passion for DataOps and its practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 6 days ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job description: Job Description Role: The purpose of this role is to the first point of contact for the B2B users who call Wipro Service Desk to troubleshoot appropriate end user issues in line with Wipro’s Service Desk objectives ͏ Do: Be responsible for primary user support and customer service Respond to queries from all calls, portal, emails, chats from the client Become familiar with each client and their respective applications/ processes Learn fundamental operations of commonly-used software, hardware and other equipment Follow standard service desk operating procedures by accurately logging all service desk tickets using the defined tracking software Ensure that the scorecard is maintained as per SoW with respect to TAT, SLA & hits Manage all queries or escalate if not resolve as per the defined helpdesk policies and framework Regular MIS & resolution log management on queries raised Record events and problems and their resolution in logs Follow-up and update customer status and information Pass on any feedback, suggestions, escalations by customers to the appropriate internal team Identify and suggest improvements on processes, procedures etc. ͏ Deliver: No. Performance Parameter Measure 1. Service Desk Delivery Adherence to TAT, SLA as per SoW Minimal Escalation Customer Experience 2. Personal Attendance Documentation etc. ͏ ͏ Mandatory Skills: Service Desk Management . Experience: 3-5 Years . Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. 
Applications from people with disabilities are explicitly welcome.

Posted 6 days ago

Apply

2.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


Software Development Engineer - Backend
Location: Bengaluru
Experience: 0–2 years

About Pocket FM
Pocket FM is India’s leading audio streaming platform with a mission to redefine the way stories are consumed. We are building a personalized entertainment experience for the next billion users through a robust tech platform powered by data, intelligence, and audio innovation. Join us on our journey to scale storytelling to millions, one stream at a time.

Role Overview
We are looking for a motivated and curious SDE-1 to join our fast-paced engineering team. You will play a key role in building scalable backend services and APIs that power our audio platform. If you are passionate about backend development and have hands-on experience with Python or Golang, we’d love to connect with you.

Responsibilities
Build and maintain highly scalable backend systems using Python or Golang. Develop RESTful APIs and services for Pocket FM’s core platform. Work closely with product, design, and data teams to deliver seamless user experiences. Optimize application performance and write clean, maintainable code. Participate in code reviews, learning sessions, and continuous improvement initiatives. Troubleshoot and debug production issues as part of a collaborative team.

Requirements
Bachelor’s degree in Computer Science, Engineering, or a related field. 0–2 years of backend development experience in Python or Golang. Strong understanding of computer science fundamentals (DSA, OOP, OS, networking). Familiarity with REST API development and version control (Git). Working knowledge of databases (e.g., PostgreSQL, MongoDB, Redis). Ability to write modular, reusable, and testable code. Good communication skills and a proactive attitude.

Nice to Have
Exposure to cloud platforms like AWS, GCP, or Azure. Experience with container technologies like Docker. Familiarity with monitoring, logging, or CI/CD tools. Contributions to open-source projects or personal backend projects.
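As a toy illustration of the "modular, reusable, and testable code" the listing asks for (all names here are invented for the example, not Pocket FM's actual API), here is a cursor-based pagination helper of the kind that often sits behind feed-style REST endpoints:

```python
from typing import List, Optional, Tuple

def paginate(items: List[dict], cursor: Optional[int], limit: int) -> Tuple[List[dict], Optional[int]]:
    """Return one page of items plus the cursor for the next page.

    `cursor` is the index of the first item to return (None = start);
    the returned cursor is None once all items have been served.
    """
    start = cursor or 0
    page = items[start:start + limit]
    next_cursor = start + limit if start + limit < len(items) else None
    return page, next_cursor

# Example feed of five episodes, paged two at a time.
episodes = [{"id": i} for i in range(5)]
page, cur = paginate(episodes, None, 2)    # ids 0, 1; next cursor 2
page2, cur2 = paginate(episodes, cur, 2)   # ids 2, 3; next cursor 4
```

Keeping the pagination logic in a pure function like this (no web framework, no database handle) is what makes it trivially unit-testable, which is the point the requirements list is driving at.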

Posted 6 days ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting. Your Key Responsibilities Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage. ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources. Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices. Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity. Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues. Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data. DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation. 
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow. Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks. Solid understanding of ETL/ELT design and implementation principles. Strong SQL and PySpark skills for data transformation and validation. Exposure to Python for automation and scripting. Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred). Experience in working with Power BI or Tableau for data visualization and reporting support. Strong problem-solving skills, attention to detail, and commitment to data quality. Excellent communication and documentation skills to interface with technical and business teams. Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.

To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles. Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem. Experience working with Informatica CDI or similar data integration tools. Scripting and automation experience in Python/PySpark. Ability to support data pipelines in a rotational on-call or production support environment. Comfortable working in a remote/hybrid and cross-functional team setup.

Technologies and Tools
Must haves
Azure Databricks: experience in data transformation and processing using notebooks and Spark. Azure Data Lake: experience working with hierarchical data storage in Data Lake. Azure Synapse: familiarity with distributed data querying and data warehousing.
Azure Data Factory: hands-on experience in orchestrating and monitoring data pipelines. ETL process understanding: knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.

Good to have
Power BI or Tableau for reporting support. Monitoring/logging using Azure Monitor or Log Analytics. Azure DevOps and Git for CI/CD and version control. Python and/or PySpark for scripting and data handling. Informatica Cloud Data Integration (CDI) or similar ETL tools. Shell scripting or command-line data handling. SQL (across distributed and relational databases).

What We Look For
Enthusiastic learners with a passion for DataOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
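The data profiling and validation responsibility above would normally be done in SQL or PySpark on Databricks; as a hedged, pure-Python sketch of the same idea (invented column names, not EY's actual schema), a row-level quality check might look like:

```python
from typing import Dict, List

def profile_rows(rows: List[Dict], required: List[str]) -> Dict[str, int]:
    """Count basic data-quality problems in a batch of records:
    required columns that are absent, and columns present but null."""
    stats = {"rows": len(rows), "missing_field": 0, "null_value": 0}
    for row in rows:
        for col in required:
            if col not in row:
                stats["missing_field"] += 1
            elif row[col] is None:
                stats["null_value"] += 1
    return stats

# Toy pricing feed: one null price, one record missing the price column.
rows = [
    {"security_id": "A1", "price": 101.5},
    {"security_id": "A2", "price": None},
    {"security_id": "A3"},
]
stats = profile_rows(rows, ["security_id", "price"])
```

In a real pipeline the same counts would come from PySpark aggregations (e.g. counting nulls per column on a DataFrame) and feed an alerting threshold, but the validation logic is the same shape.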

Posted 6 days ago

Apply

5.0 - 10.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Hiring for Project Automation Engineer - Senior (Team Lead)

Job Description:
Commissioning of automation projects comprising PLC, SCADA, servo, AC drives, and field instrumentation. Participate in inquiries at the pre-sales stage and help choose the right hardware and software to match customers’ requirements. Prepare all the engineering drawings and documentation necessary for the project. Develop and test PLC logic and SCADA/HMI according to the client’s requirements by studying the BOM, I/O list, P&ID, logic and control philosophy, process flow diagram, loop drawings, interlock list and critical parameters. Develop SCADA graphics with advanced facilities such as alarm configuration, instrument and process faceplates, data logging, live data trends and historical trends, batch and periodic report generation, trend templates, system configuration, recipe files, and local messages. Conduct the FAT (Factory Acceptance Test) after completion of panel manufacturing. Prepare technical documentation such as annotations, SOPs, operating manuals for the system, and loop drawings. Participate in commissioning and the SAT (Site Acceptance Test) at the customer’s site. Knowledge of PlantPAx systems, batch programming (as per ISA-88), SIS (process functional safety, SIL 2 and SIL 3 systems) and Industry 4.0 solutions is highly desirable.

Location: Indore (fixed office location)
Experience: Minimum 5-10 years in Rockwell Automation

Skills Required
1. Client handling
2. Strong technical knowledge
3. The ability to work well both as part of a team and individually
4. Zeal to learn new things

Interested candidates, please share your resume at ankur.tiwari@ics-india.co.in, or call 9109188512 to connect.

Posted 6 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site


Proven experience in managing and automating CI/CD pipelines using tools like Jenkins, Azure DevOps, or GitLab. Expertise in cloud platforms (AWS, Azure, GCP) and experience with services like compute, storage, and API gateways. Strong proficiency in containerization technologies (Docker) and orchestration tools (Kubernetes). In-depth knowledge of monitoring, logging, and performance tuning. Experience managing and deploying microservices-based applications and ensuring high availability, scalability, and resilience. Familiarity with automation frameworks and scripting languages (Python, Bash, PowerShell).
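A staple of the automation scripting this role describes is retrying flaky calls (deployments, health checks, API polls) with exponential backoff. A minimal sketch in Python, with invented function names and a `ConnectionError` standing in for whatever transient failure the real job would catch:

```python
import time
from typing import Callable, List

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 8.0) -> List[float]:
    """Exponential backoff schedule: base * 2^n per attempt, capped."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

def call_with_retries(fn: Callable[[], str], attempts: int = 4, base: float = 0.5) -> str:
    """Call `fn`, retrying on ConnectionError and sleeping between failures."""
    for i, delay in enumerate(backoff_delays(attempts, base)):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise  # retry budget exhausted; surface the error
            time.sleep(delay)
    raise AssertionError("unreachable")
```

Production versions usually add jitter to the delay (to avoid thundering herds) and a circuit breaker in front of repeatedly failing dependencies; this sketch shows only the core schedule.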

Posted 6 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site


Strong application development work experience; an Agile environment is preferred. Solid application design, coding, testing, maintenance and debugging skills. Experience with JUnit and Cucumber testing. Experience with APM monitoring tools and logging tools like Splunk. Proficiency with JIRA and Confluence (preferred). Expertise in development using Core Java, J2EE, XML, Web Services/SOA and Java frameworks: Spring, Spring Batch, Spring Boot, JPA, REST, MQ. Knowledgeable in developing RESTful microservices with this technical stack; hands-on experience in AWS. Experience working with Git/Bitbucket, Maven, Gradle and Jenkins to build and deploy code to production environments. Hands-on experience with CI/CD and Kubernetes.

Posted 6 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site


P2-C1-TSTS

Development
Design, develop, and maintain Java-based microservices. Write clean, efficient, and well-documented code. Collaborate with other developers and stakeholders to define requirements and solutions. Participate in code reviews and contribute to team knowledge sharing.

Microservices Architecture
Understand and apply microservices principles and best practices. Design and implement RESTful APIs. Experience with containerization technologies (e.g., Docker) and orchestration (e.g., Kubernetes). Deep understanding of distributed systems and service discovery. Experience with design patterns (e.g., circuit breaker pattern, proxy pattern).

Testing & Quality
Develop and execute unit, integration, and performance tests. Ensure code quality and adhere to coding standards. Debug and resolve issues promptly.

Deployment & Monitoring
Participate in the CI/CD pipeline. Deploy microservices to cloud platforms (e.g., AWS, Azure, GCP). Monitor application performance and identify areas for improvement.

Programming Languages
Proficiency in Java (J2EE, Spring Boot). Familiarity with other relevant languages (e.g., JavaScript, Python).

Microservices
Experience designing and developing microservices. Knowledge of RESTful APIs and other communication patterns. Experience with the Spring Framework. Experience with containerization (Docker) and orchestration (Kubernetes).

Databases
Experience with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB). Familiarity with ORM frameworks (e.g., JPA, Hibernate).

Cloud Platforms
Experience with at least one cloud platform (e.g., AWS, Azure, GCP).

Tools & Technologies
Familiarity with CI/CD tools (e.g., Jenkins, Git). Knowledge of logging and monitoring tools (e.g., Splunk, Dynatrace). Experience with messaging brokers (e.g., Kafka, ActiveMQ).

Other
Strong problem-solving and analytical skills. Excellent communication and collaboration skills.
Experience working in Agile/Scrum environments.

DevOps
Experience with DevOps practices and automation.
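The circuit breaker pattern mentioned above can be sketched in a few lines. This is a toy illustration in Python of the state machine (closed, open, half-open), not the production Java implementation a team like this would typically use (e.g., a library such as Resilience4j); the clock is injectable so the behavior is testable:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures` consecutive
    failures, reject calls until `reset_after` seconds pass, then allow
    one trial call (half-open) before fully closing again."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def call(self, fn):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open")
            self.opened_at = None  # half-open: permit one trial call
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # success closes the breaker
        return result
```

The point of the pattern is the fast `RuntimeError` while open: callers fail immediately instead of piling requests onto an already-failing downstream service.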

Posted 6 days ago

Apply



7.0 - 9.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Site Reliability is about combining development and operations knowledge and skills to help make the organization better. Whether you have a development background and are interested in learning more about operations or are a DevOps/Systems Engineer who is interested in developing internal tools – Cvent SRE can benefit from your skillsets. Ultimately, we are looking for passionate people who love learning and technology. BS or MS in Computer Science or a related technical degree required. We are responsible for ensuring that our platform is stable and healthy. We break down barriers by fostering developer ownership and empowering developers. We support them by building creative and robust solutions to operations problems. We use our background as generalists to work closely with product development teams from the early stages of design all the way through identifying and resolving production issues. We see the big picture. We help create and enforce standards while facilitating an agile and learning culture. We use SRE principles such as blameless postmortems and operational load caps to ensure we’re constantly improving our knowledge and maintaining a good quality of life. Overall, we’re passionate about automation, learning and participating in dynamic day-to-day work.

Must Have:
• 7-9 years of relevant experience
• Experience with SDLC methodologies (preferably Agile software development methodology)
• Experience with software development – knowledge of Java/Python/Ruby is a must, preferably with a good understanding of Object-Oriented Programming concepts
• Exposure to managing AWS services / operational knowledge of managing applications in AWS
• Experience with configuration management tools such as Chef, Puppet, Ansible or equivalent
• Solid Windows and Linux administration skills
• Working with APM, monitoring, and logging tools (New Relic, DataDog, Splunk)
• Experience in managing 3-tier application stacks / incident response
• Experience with build tools such as Jenkins, CircleCI, Harness etc.
• Exposure to containerization concepts – Docker, ECS, EKS, Kubernetes
• Working experience with NoSQL databases such as MongoDB and Couchbase, as well as Postgres
• Self-motivation and the ability to work under minimal supervision are a must

Good to Have:
• F5 load balancing concepts
• Basic understanding of observability & SLIs/SLOs
• Message queues (RabbitMQ)
• Understanding of basic networking concepts
• Experience with package managers such as Nexus, Artifactory or equivalent
• Good communication skills
• People management experience
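The SLI/SLO item above boils down to simple arithmetic that every SRE interview touches: an availability SLO implies a fixed "error budget" of allowed downtime per window. A minimal sketch (the function names are ours, not Cvent's tooling):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of downtime an availability SLO permits over a window.
    e.g. a 99.9% SLO over 30 days allows 30*24*60 * 0.001 = 43.2 min."""
    return window_days * 24 * 60 * (1.0 - slo)

def budget_consumed(downtime_minutes: float, slo: float, window_days: int = 30) -> float:
    """Fraction of the error budget already burned (> 1.0 means the SLO is blown)."""
    return downtime_minutes / error_budget_minutes(slo, window_days)
```

Teams typically gate risky work on this number: while budget remains, ship; once `budget_consumed` approaches 1.0, the remaining window is spent on reliability work instead of features.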

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What you’ll be doing...
Design, build and maintain robust, scalable data pipelines and ETL processes. Ensure high data quality, accuracy and integrity across all systems. Work with structured and unstructured data from multiple sources. Optimize data workflows for performance, reliability, and cost efficiency. Collaborate with analysts and data scientists to meet data needs. Monitor, troubleshoot, and improve existing data systems and jobs. Apply best practices in data governance, security and compliance. Use tools like Spark, Kafka, Airflow, SQL, Python and cloud platforms. Stay updated with emerging technologies and continuously improve data infrastructure.

What we’re looking for…
You Will Need To Have
Bachelor's degree or four or more years of work experience. Expertise in AWS Data Stack – strong hands-on experience with S3, Glue, EMR, Lambda, Kinesis, Redshift, Athena, and IAM security best practices. Big Data & Distributed Computing – deep understanding of Apache Spark (batch and streaming) for large-scale data processing and analytics. Real-Time & Batch Data Processing – proven experience designing, implementing, and optimizing event-driven and streaming data pipelines using Kafka and Kinesis. ETL/ELT & Data Modeling – strong experience in architecting and optimizing scalable ETL/ELT pipelines for structured and unstructured data.
Programming Skills – Proficiency in Scala and Java for data processing and automation. Database & SQL Optimization – Strong understanding of SQL and experience with relational databases (PostgreSQL, MySQL). Expertise in SQL query tuning, data warehousing and working with Parquet, Avro, ORC formats. Infrastructure as Code (IaC) & DevOps – Experience with CloudFormation, CDK, and CI/CD pipelines for automated deployments in AWS. Monitoring, Logging & Observability – Familiarity with AWS CloudWatch, Prometheus, or similar monitoring tools. API Integration – Ability to fetch and process data from external APIs and databases. Architecture & Scalability Mindset – Ability to design and optimize data architectures for high-volume, high-velocity, and high-variety datasets. Performance Optimization – Experience in optimizing data pipelines for cost and performance. Cross-Team Collaboration – Work closely with Data Scientists, Analysts, DevOps, and Business Teams to deliver end-to-end data solutions.

Even better if you have one or more of the following: Agile & CI/CD Practices – Comfortable working in Agile/Scrum environments, driving continuous integration and continuous deployment. #TPDRNONCDIO

Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours
40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
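The streaming-pipeline skills above center on windowed aggregation over event time. As a hedged, pure-Python toy (real work here would use Spark Structured Streaming or Kafka Streams, and would also handle late and out-of-order events), a tumbling-window count looks like:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def tumbling_window_counts(events: List[Tuple[int, str]], window_s: int = 60) -> Dict[int, int]:
    """Count events per fixed (tumbling) event-time window.

    `events` are (epoch_seconds, payload) pairs; each event falls in
    exactly one window, keyed by the window's start timestamp."""
    counts: Dict[int, int] = defaultdict(int)
    for ts, _payload in events:
        counts[(ts // window_s) * window_s] += 1
    return dict(counts)

# Four events: two in [0, 60), one in [60, 120), one in [120, 180).
events = [(5, "a"), (59, "b"), (61, "c"), (125, "d")]
```

Tumbling windows (non-overlapping, fixed size) are the simplest case; sliding and session windows follow the same keying idea but let one event contribute to several windows or to a gap-defined group.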

Posted 6 days ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting. Your Key Responsibilities Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage. ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premise sources. Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices. Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity. Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues. Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data. DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation. 
Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow. Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.

Skills and attributes for success
Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks. Solid understanding of ETL/ELT design and implementation principles. Strong SQL and PySpark skills for data transformation and validation. Exposure to Python for automation and scripting. Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred). Experience in working with Power BI or Tableau for data visualization and reporting support. Strong problem-solving skills, attention to detail, and commitment to data quality. Excellent communication and documentation skills to interface with technical and business teams. Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing.

To qualify for the role, you must have
4–6 years of experience in DataOps or Data Engineering roles. Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem. Experience working with Informatica CDI or similar data integration tools. Scripting and automation experience in Python/PySpark. Ability to support data pipelines in a rotational on-call or production support environment. Comfortable working in a remote/hybrid and cross-functional team setup.

Technologies and Tools
Must haves
Azure Databricks: experience in data transformation and processing using notebooks and Spark. Azure Data Lake: experience working with hierarchical data storage in Data Lake. Azure Synapse: familiarity with distributed data querying and data warehousing.
Azure Data Factory: hands-on experience in orchestrating and monitoring data pipelines. ETL process understanding: knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques.

Good to have
Power BI or Tableau for reporting support. Monitoring/logging using Azure Monitor or Log Analytics. Azure DevOps and Git for CI/CD and version control. Python and/or PySpark for scripting and data handling. Informatica Cloud Data Integration (CDI) or similar ETL tools. Shell scripting or command-line data handling. SQL (across distributed and relational databases).

What We Look For
Enthusiastic learners with a passion for DataOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Vadodara, Gujarat, India

On-site


About The Role
We are looking for a Software Engineer who will be responsible for designing, developing, and maintaining modern web applications using .NET 6, React.js, and TypeScript. While the primary focus will be on our new application, there may be occasional requirements to support and maintain a legacy application built on older technologies. This role offers an exciting opportunity to work with a modern technology stack while gaining exposure to legacy systems.

What You Will Do
- Develop and maintain modern applications using .NET 6, C# 10, Web API, and React.js.
- Collaborate with cross-functional teams to design scalable and efficient solutions.
- Write clean, maintainable code following best practices and coding standards.
- Perform unit testing and integration testing using NUnit, JEST, and React Testing Library.
- Monitor application health and performance using Serilog with ELK.
- Participate in code reviews and provide constructive feedback.
- Contribute to UI development using TypeScript, Ant Design, and Redux.js.
- Ensure application security and performance optimizations.
- Support the legacy application (.NET 4.x, C# 5, Oracle, ASPX, MVC, COM/VB6) when required.

What You Will Need
Primary Technology Stack (New Application):
- .NET 6, C# 10, .NET 6 Web API
- React.js, Redux.js, TypeScript, Ant Design
- HTML5, CSS3 (SASS)
- NUnit, JEST, React Testing Library
- Oracle Database
- Logging & Monitoring: Serilog with ELK

Legacy Application (May Be Required Occasionally):
- .NET 4.x, C# 5
- ASPX, MVC
- Oracle Database
- COM (VB6)

Requirements
- 3–5 years of experience in .NET, C#, and modern front-end technologies.
- Experience in full-stack development (both frontend and backend).
- Strong knowledge of RESTful APIs and microservices.
- Experience with unit testing and test automation frameworks.
- Familiarity with Agile methodologies and SDLC best practices.
- Strong problem-solving and analytical skills.
- Willingness to work on legacy systems if required.

Nice to Have
- Experience in migrating legacy applications to modern frameworks.
- Familiarity with cloud platforms (AWS, Azure, or GCP).
- Knowledge of containerization (Docker, Kubernetes).

Posted 6 days ago

Apply

0 years

0 Lacs

Patel Nagar, Delhi, India

Remote


The trend of working remotely has seen exponential growth, especially after the global shift in work culture post-2020. With Chandigarh emerging as a hotspot for startups, educational institutions, and tech development, there is a growing number of opportunities for freshers and college students seeking work-from-home jobs in 2025. This blog is a complete guide for students and new graduates residing in or around Chandigarh who are looking for legitimate, flexible, and skill-building remote job opportunities. Whether you want to earn extra income during college, gain work experience, or build a professional portfolio, this post will help you discover the right path.

Why Work from Home Jobs Are Ideal for Students and Freshers
- Flexible working hours for managing studies and work
- Zero commute means more time and energy saved
- Early exposure to professional environments and skills
- Opportunities to build a digital portfolio
- Chance to earn while learning

Top Work from Home Jobs in Chandigarh for Freshers and College Students
Here is a list of remote job roles that are in demand in Chandigarh and open to students and freshers in 2025.
Content Writing and Blogging
- Popularity: High
- Type: Freelance/Part-time
- Industries Hiring: EdTech, Digital Marketing, E-commerce, Startups
- Responsibilities: Writing articles, blog posts, and product descriptions; researching and editing content; incorporating SEO keywords
- Skills Needed: Proficient English writing; creativity and grammar; SEO basics
- Tools to Learn: Grammarly, SurferSEO, Google Docs
- Expected Salary: ₹5,000–₹25,000/month (freelance or part-time)

Online Tutoring Jobs
- Popularity: Rising rapidly
- Type: Freelance or part-time
- Industries Hiring: EdTech platforms like Byju's, Vedantu, Chegg, Unacademy
- Subjects in Demand: Math, Physics, Chemistry; Spoken English and Grammar; Coding for Kids (Python, Java)
- Skills Needed: Strong grasp of academic subjects; good communication; teaching enthusiasm
- Platforms: Vedantu, TutorMe, Chegg, Superprof
- Expected Salary: ₹200–₹800/hour or ₹15,000–₹40,000/month

Social Media Management
- Popularity: High among college-goers
- Type: Freelance/Internship
- Industries Hiring: Influencers, startups, local businesses
- Tasks: Managing Instagram, Facebook, and LinkedIn profiles; creating reels, posts, and stories; scheduling posts and engaging with followers
- Skills to Learn: Canva; Buffer/Hootsuite; copywriting basics
- Expected Salary: ₹5,000–₹20,000/month

Also Read: Genuine Work from Home Jobs in Ahmedabad Without Investment

Data Entry & Online Surveys
- Popularity: Beginner-friendly
- Type: Part-time/Project-based
- Industries Hiring: E-commerce, Research, Admin support
- Requirements: Fast typing speed; attention to detail; basic MS Excel and Word
- Tools: Google Sheets, Excel, Online Form Builders
- Expected Salary: ₹6,000–₹15,000/month

Graphic Design Internships
- Popularity: Medium to High
- Type: Internship/Freelance
- Industries Hiring: Design Agencies, E-commerce Brands, Startups
- Responsibilities: Creating logos, banners, posters, and social media creatives; working on brand identity projects
- Skills Needed: Adobe Illustrator, Photoshop; Canva, Figma (for beginners)
- Learn From: Udemy, Coursera, Canva tutorials
- Expected Salary: ₹7,000–₹20,000/month

Virtual Assistant Jobs
- Popularity: Emerging role for students
- Type: Part-time
- Industries Hiring: Coaches, Consultants, Solopreneurs
- Responsibilities: Managing calendars and emails; booking appointments; handling spreadsheets
- Key Tools: Trello, Google Calendar, Zoom, Slack
- Expected Salary: ₹8,000–₹18,000/month

Customer Service (Chat/Email Support)
- Popularity: Constant demand
- Type: Full-time/Part-time
- Industries Hiring: E-commerce, SaaS, Telecom
- Key Responsibilities: Responding to customer queries via email or chat; logging issues and resolving complaints
- Skills Needed: Strong communication; typing speed; patience and problem-solving
- Expected Salary: ₹10,000–₹22,000/month

Affiliate Marketing & Influencer Collaborations
- Popularity: Ideal for students with a social media following
- Type: Commission-based or freelance
- Industries Hiring: E-commerce, Health & Wellness, Tech Gadgets
- What You'll Do: Promote products on Instagram, YouTube, and WhatsApp; earn per sale or sign-up
- Platforms: Amazon Associates, ClickBank, ShareASale
- Potential Earnings: ₹2,000–₹30,000/month or more, based on reach

Also Read: Highest Paying Work from Home Jobs in Mumbai in 2025

Freelance Video Editing
- Popularity: Growing rapidly
- Type: Freelance/Internship
- Industries Hiring: YouTubers, Brands, Event Planners
- Skills Needed: Adobe Premiere Pro; Final Cut Pro or CapCut; creativity and timing
- Good For: Mass communication/media students; creators looking to monetize
- Expected Salary: ₹8,000–₹30,000/month

Transcription and Translation Jobs
- Popularity: Moderate
- Type: Freelance
- Industries Hiring: Medical, Legal, Academic, YouTubers
- Responsibilities: Listening to and converting audio to text; translating documents or videos
- Languages in Demand: Hindi, Punjabi, Tamil, Bengali; English to/from foreign languages like French, German
- Expected Salary: ₹200–₹1,000/hour or per project

Where to Find Remote Jobs in Chandigarh for Students & Freshers
Top Platforms to Explore:
- CareerCartz – Updated with remote jobs suited for freshers
- Internshala – Ideal for internships and part-time work
- LinkedIn – Set the filter to "Remote" and search by location
- Fiverr & Upwork – Great for freelance gigs
- Naukri.com & Indeed – Trusted job portals with WFH filters

Essential Skills for Getting Hired in Remote Jobs
- Time Management: Balance studies and work efficiently
- Self-Motivation: Stay focused without constant supervision
- Communication Skills: Verbal and written clarity
- Technical Skills: Familiarity with common tools (Google Docs, Zoom, Canva)
- Willingness to Learn: Online courses, certifications, and workshops

Online Certifications That Boost Your Hiring Chances
- Google Digital Garage – Digital Marketing
- HubSpot Academy – Inbound Marketing & CRM
- Canva Design School – Graphic Design Basics
- Coursera/Udemy – Content Writing & Blogging
- Microsoft Excel – Beginner to Advanced

Tips to Succeed in Your First Work from Home Job
- Set up a quiet and distraction-free workspace
- Stick to a daily routine and deadlines
- Use tools like Notion, Trello, or Google Keep to stay organized
- Always over-communicate with your employer or manager
- Keep learning and upgrading your skills

Conclusion – Work from Home Jobs in Chandigarh for Freshers
With countless opportunities opening up in the digital space, Chandigarh's freshers and students are in a prime position to take advantage of work-from-home jobs in 2025. These roles are not just about earning money; they are about gaining real-world experience, building portfolios, and developing skills that employers value. Whether you are in college or a recent graduate, now is the perfect time to explore online jobs, start freelancing, or land an internship that sets the foundation for your career. Stay proactive, keep exploring opportunities on CareerCartz, and make your remote job journey a success!

FAQs – Work from Home Jobs in Chandigarh for Freshers

Can college students really get paid for working from home?
Yes, many companies hire students for part-time roles, internships, and freelance gigs.

What are the best part-time WFH jobs for students in Chandigarh?
Content writing, online tutoring, graphic design, and social media management are great options.

Are work-from-home jobs safe and legitimate?
Yes, if you apply through trusted portals like CareerCartz, LinkedIn, or official company sites.

Do I need experience to apply for these jobs?
Most jobs for students and freshers require only basic skills and enthusiasm. No prior experience is needed for many roles.

How many hours a week can a student work remotely?
You can start with 10–20 hours per week, depending on your college schedule.

Do I need a laptop to work from home?
Yes, having a laptop and a stable internet connection is highly recommended for most roles.

Can I work from home without any technical skills?
Yes. Roles like content writing, virtual assistance, and data entry don't require advanced tech skills.

How do I get paid for freelance or part-time work?
Payment is usually made via bank transfer, Paytm, or platforms like PayPal (for international gigs).

Is freelancing a good career option for students?
Absolutely. Freelancing builds your portfolio and can evolve into a full-time remote career.

How can CareerCartz help students in Chandigarh?
CareerCartz provides verified remote jobs, internships, and part-time opportunities specially tailored for freshers and college students.

Related Posts:
- Top 10 Remote Customer Service Jobs You Can Start Today
- The Pros and Cons of Working Remote Data Entry Jobs
- How to Land Your First Remote Entry-Level Job: Tips and Tricks
- How to Thrive in Remote Customer Service Jobs: Tips for Success
- Best Remote Customer Success Jobs You Can Work From Anywhere
- Top Remote Front End Developer Jobs Hiring in 2025
- Top 10 Work from Home Jobs in Delhi Hiring Now
- Legit Work From Home Jobs for Stepmoms: Real Opportunities & Flexible Roles in 2025

Posted 6 days ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


About The Role
The products and services of Eclat Engineering Pvt. Ltd. are used by some of the leading institutions in India and abroad, and demand for them is growing rapidly. We are looking for a capable and dynamic Senior DevOps Engineer to help set up, maintain, and scale our infrastructure operations. This individual will have the challenging responsibility of channeling our IT infrastructure and offering customer services that meet stringent international standards of service quality. They will leverage the latest IT tools to automate and streamline the delivery of our services while implementing industry-standard processes and knowledge management.

What Describes You Best
● B.E / B.Tech / M.E / M.Tech / MCA / M.Sc. IT (if not, you should be able to demonstrate the required skills)
● Overall 3+ years of experience in DevOps and cloud operations, specifically in AWS
● Experience with Linux administration
● Experience with microservice architecture, containers, Kubernetes, and Helm is a must
● Experience in configuration management, preferably Ansible
● Experience in shell scripting is a must
● Experience in developing and maintaining CI/CD processes using tools like GitLab and Jenkins
● Experience in logging, monitoring, and analytics
● An understanding of writing Infrastructure as Code using tools like Terraform
● Preferences: AWS, Kubernetes, Ansible

Technical Knowledge & Skills
Must haves:
● Knowledge of the AWS cloud platform
● Good experience with microservice architecture, Kubernetes, Helm, and container-based technologies
● Hands-on experience with Ansible
● Experience working with and maintaining CI/CD processes
● Hands-on experience with version control tools like Git
● Experience with monitoring tools such as CloudWatch, Sysdig, etc.
● Sound experience in administering Linux servers and shell scripting
● A good understanding of IT security and the knowledge to secure production environments (OS and server software)

Good to have:
● Preferred qualifications: AWS certifications, Kubernetes certifications, or the like
● Knowledge/experience in managing access controls and secret management
● Experience with Infrastructure as Code with Terraform/CloudFormation
● Good understanding of benchmarking standards like CIS
● Experience with Python scripting and writing SQL queries is a bonus
● Ability to mentor and develop junior team members

What Will You Own
- Infrastructure and Deployment Automation: Design, implement, and maintain automation for infrastructure provisioning and application deployment. Own the CI/CD pipelines and ensure they are efficient, reliable, and scalable.
- System Monitoring and Performance: Take ownership of monitoring systems and ensure the health and performance of the infrastructure. Proactively identify and address performance bottlenecks and system issues.
- Cloud Infrastructure Management: Manage cloud infrastructure (e.g., AWS, Azure, GCP) and optimize resource usage. Implement cost-saving measures while maintaining scalability and reliability.
- Configuration Management: Manage configuration management tools (e.g., Ansible, Puppet, Chef) to ensure consistency across environments. Automate configuration changes and updates.
- Security and Compliance: Own security policies, implement best practices, and ensure compliance with industry standards. Lead efforts to secure infrastructure and applications, including patch management and access controls.
- Collaboration with Development and Operations Teams: Foster collaboration between development and operations teams, promoting a DevOps culture. Be the go-to person for resolving cross-functional infrastructure issues and improving the development process.
- Disaster Recovery and Business Continuity: Develop and maintain disaster recovery plans and procedures. Ensure business continuity in the event of system failures or other disruptions.
- Documentation and Knowledge Sharing: Create and maintain comprehensive documentation for configurations, processes, and best practices. Share knowledge and mentor junior team members.
- Technical Leadership and Innovation: Stay up to date with industry trends and emerging technologies. Lead efforts to introduce new tools and technologies that enhance DevOps practices.
- Problem Resolution and Troubleshooting: Diagnose and resolve complex issues related to infrastructure and deployments. Implement preventive measures to reduce recurring problems.
- Performance Optimization: Continuously improve system performance and resource utilization. Conduct capacity planning and scalability assessments.
- Incident Response: Lead incident response activities, including root cause analysis and remediation. Be available for on-call support as needed.

Why Join Us
● Be part of our growth story as we aim to take a leadership position in international markets
● Opportunity to manage and lead global teams and a channel partner network
● Join technology innovators who believe in solving world-scale challenges to drive global knowledge-sharing
● Healthy work/life balance, with wellbeing initiatives, parental leave, career development assistance, and the required work infrastructure support

Posted 6 days ago

Apply

2.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Roles & Responsibilities
- Ensure efficient handling of supplier queries and issues through meticulous logging and escalation procedures.
- Negotiate with suppliers to secure competitive pricing and favorable contract terms, optimizing cost-effectiveness.
- Execute bidding processes for indirect materials and services, ensuring efficiency and adherence to budgetary constraints.
- Conduct comprehensive market research, issue RFIs and RFPs, and analyze proposals to inform supplier selection and contract preparation.
- Perform advanced expenditure analyses to provide key stakeholders with valuable insights and strategic guidance.
- Coordinate agreement renewals or terminations with suppliers, ensuring seamless transitions and maintaining positive relationships.
- Lead annual price-update initiatives to uphold competitiveness and cost efficiency, aligned with strategic sourcing strategies.
- Develop a deep understanding of industry dynamics and supplier landscapes, ensuring optimal solution selection compliant with PMI Principles & Practices.

Requirements
- 2–3 years of experience in procurement
- Good knowledge of IT procurement and price negotiation
- Willingness to work from the office (Bangalore)
- Exceptional communication skills
- Good sourcing knowledge

Benefits
- Competitive salary and performance-based incentives.
- ESOP plan.
- Flexible work hours.
- Opportunities for career growth and advancement within a rapidly growing company.
- Dynamic and collaborative work environment with a diverse and talented team.

Posted 6 days ago

Apply

15.0 years

0 Lacs

Thane, Maharashtra, India

On-site


Hiring a Senior DevOps Leader for a High-Scale, Multi-Cloud Environment

Finding the right Senior DevOps Leader for your organization, especially one with over 15 years of experience and a background in high-scale operations leveraging GitLab, Kubernetes, GCP, and AWS, is a critical undertaking. This role demands a unique blend of deep technical expertise, strategic thinking, and proven leadership capabilities. Here is a comprehensive guide to what you should be looking for.

Key Responsibilities to Expect
A Senior DevOps Leader in this context will be responsible for more than just managing infrastructure; they will be a strategic partner driving efficiency, innovation, and reliability across the organization.

Strategic Leadership & Vision:
- Defining and executing a long-term DevOps strategy aligned with business objectives, particularly for high-scale and resilient systems.
- Driving the adoption of DevOps best practices, tools, and culture across engineering and operations teams.
- Leading architectural decisions for CI/CD, containerization, cloud infrastructure, and automation, ensuring scalability, security, and cost-effectiveness.
- Evaluating and integrating new and emerging technologies (e.g., AI in DevOps, advanced monitoring solutions) to enhance operational efficiency and system performance.

Team Leadership & Development:
- Building, mentoring, and leading a high-performing team of DevOps engineers.
- Fostering a collaborative, innovative, continuous-improvement culture within the DevOps team and its interactions with other departments.
- Managing resource allocation, project prioritization, and performance management for the DevOps team.

Technical Oversight & Execution:
- Overseeing the design, implementation, and management of robust CI/CD pipelines using GitLab CI.
- Leading the strategy and governance for Kubernetes deployments at scale, including cluster management, networking, security, and resource optimization across GCP (GKE) and AWS (EKS).
- Architecting and managing multi-cloud infrastructure (GCP and AWS), focusing on high availability, disaster recovery, security, and cost optimization.
- Championing Infrastructure as Code (IaC) practices using tools like Terraform or CloudFormation.
- Implementing and refining comprehensive monitoring, logging, and alerting strategies (e.g., using Prometheus, Grafana, the ELK Stack, CloudWatch, and Google Cloud's operations suite) to ensure system health and proactive issue resolution.
- Driving automation initiatives across all stages of the software development lifecycle.

Collaboration & Communication:
- Working closely with development, operations, security, and product teams to streamline workflows and ensure seamless delivery of software.
- Communicating effectively with executive leadership, stakeholders, and technical teams regarding DevOps strategy, project status, risks, and performance metrics.
- Championing and enforcing security best practices (DevSecOps) throughout the development lifecycle.

Operational Excellence & Governance:
- Establishing and tracking key DevOps metrics (e.g., deployment frequency, lead time for changes, mean time to recovery (MTTR), change failure rate).
- Ensuring compliance with industry standards and internal policies.
- Managing budgets and vendor relationships related to DevOps tools and cloud services.

Essential Technical Leadership Skills
Beyond hands-on proficiency, a leader must demonstrate strategic application and governance of these technologies.

GitLab:
- Strategic Implementation: Deep understanding of GitLab's full suite (beyond just CI/CD) for source code management, pipeline orchestration, security scanning, and package management in a large enterprise.
- Scalability & Performance: Experience in scaling GitLab infrastructure and optimizing its performance for a large number of users and projects.
- Automation & Integration: Proven ability to automate complex workflows and integrate GitLab with other development and operations tools.

Kubernetes (K8s):
- Large-Scale Cluster Management: Expertise in designing, deploying, and managing multiple large-scale Kubernetes clusters on both GCP (GKE) and AWS (EKS), including cluster upgrades, multi-tenancy, and resource quotas.
- Advanced Networking & Security: In-depth knowledge of Kubernetes networking (e.g., CNI, service meshes like Istio or Linkerd) and security best practices (e.g., pod security policies, network policies, secrets management, RBAC) in a high-scale, multi-cloud environment.
- Ecosystem & Tooling: Familiarity with the broader Kubernetes ecosystem, including Helm for package management, Prometheus/Grafana for monitoring, and tools for logging and tracing.
- GitOps: Experience implementing GitOps principles for managing Kubernetes configurations and applications.

Google Cloud Platform (GCP) & Amazon Web Services (AWS):
- Multi-Cloud Strategy & Governance: Proven experience in developing and implementing multi-cloud strategies, including workload placement, data management, and consistent governance across GCP and AWS.
- Core Services Expertise: Deep understanding of and experience with core compute, storage, networking, database, and security services on both platforms (e.g., AWS EC2, S3, VPC, RDS; GCP Compute Engine, Cloud Storage, VPC, Cloud SQL).
- Infrastructure as Code (IaC): Mastery of IaC tools like Terraform (preferred for multi-cloud) or CloudFormation (AWS-specific) for provisioning and managing infrastructure in both clouds.
- Cost Optimization & Management: Demonstrable experience implementing cost-optimization strategies and managing budgets effectively across both GCP and AWS at scale.
- Security & Compliance: Expertise in designing and implementing secure cloud architectures, adhering to compliance standards (e.g., SOC 2, ISO 27001, and HIPAA where applicable) on both platforms.
- Migration Experience: Experience leading large-scale migrations to or between cloud platforms is highly desirable.

General DevOps & SRE Principles:
- Automation: A strong automation mindset, with proficiency in scripting languages (e.g., Python, Bash, PowerShell).
- Monitoring, Logging, and Observability: Experience designing and implementing comprehensive observability solutions for large-scale distributed systems.
- Site Reliability Engineering (SRE): Understanding and application of SRE principles for availability, reliability, performance, and incident response.
- DevSecOps: Proven ability to integrate security into all phases of the DevOps lifecycle.

Why Netcore?
Being first is in our nature. Netcore Cloud is the first and leading AI/ML-powered customer engagement and experience (CEE) platform that helps B2C brands increase engagement, conversions, revenue, and retention. Our cutting-edge SaaS products enable personalized engagement across the entire customer journey and build amazing digital experiences for businesses of all sizes.

Netcore's engineering team focuses on adoption, scalability, complex challenges, and the fastest processing. We use versatile tech stacks, including streaming technologies and queue management systems such as Kafka, Storm, RabbitMQ, Celery, and RedisQ.

Netcore strikes a perfect balance between experience and agility. We currently work with 5000+ enterprise brands across 18 countries, serving over 70% of India's unicorns, positioning us among the top-rated customer engagement and experience platforms. Headquartered in Mumbai, we have a global footprint across 10 countries, including the United States and Germany. Being certified as a Great Place to Work for three consecutive years reinforces Netcore's principle of being a people-centric company where you are not just an employee but part of a family.

A career at Netcore is more than just a job: it is an opportunity to shape the future. Learn more at netcorecloud.com.

What's in It for You?
- Immense growth and continuous learning.
- Solve complex engineering problems at scale.
- Work with top industry talent and global brands.
- An open, entrepreneurial culture that values innovation.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


ValueLabs is seeking talented Java developers to join our growing team in Hyderabad (WFO, 5 days).

Notice period: Immediate joiners
Shift timings: Malaysian shift (7 AM–4 PM or 8 AM–5 PM)

JD:
• BS degree in Computer Science or a related technical field, or equivalent practical experience
• 5+ years of related software engineering experience
• Must be comfortable coding in the following server-side languages: Java, Python
• Data & Storage: Strong experience in MongoDB (required), PostgreSQL (required), GraphDB (good to have), and NoSQL (good to have)
• Experience contributing to automation in a team and helping improve Development/QA using CI/CD tools (Bitbucket, Jenkins, Maven, Gradle)
• Solid understanding of Git, including branching and merging strategies
• Expertise in microservices architecture to design and build RESTful APIs
• Knowledge of cloud platforms and deployment solutions (GCP, AWS)
• Experience with Docker/Kubernetes/OpenShift would be an asset
• Experience with application performance monitoring software
• Knowledge/experience with application logging, monitoring, and performance management tools such as ELK, Prometheus, Grafana, and Google Cloud Logging
• Experience with performance-testing and load-testing tools

Posted 6 days ago

Apply

Exploring Logging Jobs in India

The logging job market in India is vibrant and offers a wide range of opportunities for job seekers interested in this field. Logging professionals are in demand across various industries such as IT, construction, forestry, and environmental management. If you are considering a career in logging, this article will provide you with valuable insights into the job market, salary range, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Chennai

These cities are known for their thriving industries where logging professionals are actively recruited.

Average Salary Range

The average salary range for logging professionals in India varies based on experience and expertise. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.

Career Path

A typical career path in logging may include roles such as Logging Engineer, Logging Supervisor, Logging Manager, and Logging Director. Professionals may progress from entry-level positions to more senior roles such as Lead Logging Engineer or Logging Consultant.

Related Skills

In addition to logging expertise, employers often look for professionals with skills such as data analysis, problem-solving, project management, and communication skills. Knowledge of industry-specific software and tools may also be beneficial.

Interview Questions

  • What is logging and why is it important in software development? (basic)
  • Can you explain the difference between logging levels such as INFO, DEBUG, and ERROR? (medium)
  • How do you handle log rotation in a large-scale application? (advanced)
  • Have you worked with any logging frameworks like Log4j or Logback? (basic)
  • Describe a challenging logging issue you faced in a previous project and how you resolved it. (medium)
  • How do you ensure that log files are secure and comply with data protection regulations? (advanced)
  • What are the benefits of structured logging over traditional logging methods? (medium)
  • How would you optimize logging performance in a high-traffic application? (advanced)
  • Can you explain the concept of log correlation and how it is useful in troubleshooting? (medium)
  • Have you used any monitoring tools for real-time log analysis? (basic)
  • How do you handle log aggregation from distributed systems? (advanced)
  • What are the common pitfalls to avoid when implementing logging in a microservices architecture? (medium)
  • How do you troubleshoot a situation where logs are not being generated as expected? (medium)
  • Have you worked with log parsing tools to extract meaningful insights from log data? (medium)
  • How do you handle sensitive information in log files, such as passwords or personal data? (advanced)
  • What is the role of logging in compliance with industry standards such as GDPR or HIPAA? (medium)
  • Can you explain the concept of log enrichment and how it improves log analysis? (medium)
  • How do you handle logging in a multi-threaded application to ensure thread safety? (advanced)
  • Have you implemented any custom log formats or log patterns in your projects? (medium)
  • How do you perform log monitoring and alerting to detect anomalies or errors in real-time? (medium)
  • What are the best practices for logging in cloud-based environments like AWS or Azure? (medium)
  • How do you integrate logging with other monitoring and alerting tools in a DevOps environment? (medium)
  • Can you discuss the role of logging in performance tuning and optimization of applications? (medium)
  • What are the key metrics and KPIs you track through log analysis to improve system performance? (medium)
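Several of the questions above (logging levels, log rotation, structured logging) can be illustrated with Python's standard `logging` module. The sketch below is illustrative only and not tied to any employer's stack; the file name `app.log`, the logger name `demo`, and the JSON field names are arbitrary choices for the example:

```python
import json
import logging
from logging.handlers import RotatingFileHandler

# Structured logging: emit each record as a JSON object so log
# aggregators (e.g. the ELK stack) can index fields, unlike free-form text.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)  # severity order: DEBUG < INFO < WARNING < ERROR < CRITICAL

# Log rotation: cap each file at ~1 MB and keep 3 backups,
# so a high-traffic application cannot fill the disk.
handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=3)
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

logger.debug("cache miss for key %s", "user:42")  # diagnostic detail
logger.info("request handled")                    # normal operation
logger.error("payment gateway timed out")         # needs attention
```

Raising the logger (or handler) level to `logging.ERROR` would silently drop the DEBUG and INFO records above, which is the usual answer to the "logging levels" question: verbosity is controlled by configuration, not by deleting log statements.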

Closing Remark

As you embark on your journey to explore logging jobs in India, remember to prepare thoroughly for interviews by honing your technical skills and understanding industry best practices. With the right preparation and confidence, you can land a rewarding career in logging that aligns with your professional goals. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies