Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
0 years
0 Lacs
Andhra Pradesh
On-site
P2-C1-TSTS

Development: Design, develop, and maintain Java-based microservices. Write clean, efficient, and well-documented code. Collaborate with other developers and stakeholders to define requirements and solutions. Participate in code reviews and contribute to team knowledge sharing.
Microservices Architecture: Understand and apply microservices principles and best practices. Design and implement RESTful APIs. Experience with containerization technologies (e.g., Docker) and orchestration (e.g., Kubernetes). Deep understanding of distributed systems and service discovery. Experience with design patterns (e.g., circuit breaker pattern, proxy pattern).
Testing & Quality: Develop and execute unit, integration, and performance tests. Ensure code quality and adhere to coding standards. Debug and resolve issues promptly.
Deployment & Monitoring: Participate in the CI/CD pipeline. Deploy microservices to cloud platforms (e.g., AWS, Azure, GCP). Monitor application performance and identify areas for improvement.
Programming Languages: Proficiency in Java (J2EE, Spring Boot). Familiarity with other relevant languages (e.g., JavaScript, Python).
Microservices: Experience designing and developing microservices. Knowledge of RESTful APIs and other communication patterns. Experience with the Spring Framework. Experience with containerization (Docker) and orchestration (Kubernetes).
Databases: Experience with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB). Familiarity with ORM frameworks (e.g., JPA, Hibernate).
Cloud Platforms: Experience with at least one cloud platform (e.g., AWS, Azure, GCP).
Tools & Technologies: Familiarity with CI/CD tools (e.g., Jenkins, Git). Knowledge of logging and monitoring tools (e.g., Splunk, Dynatrace). Experience with messaging brokers (e.g., Kafka, ActiveMQ).
Other: Strong problem-solving and analytical skills. Excellent communication and collaboration skills.
Experience working in Agile/Scrum environments.
DevOps: Experience with DevOps practices and automation.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
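The circuit breaker pattern named in the requirements above protects callers from a failing downstream service by failing fast once errors accumulate, instead of piling up timeouts. The role's stack is Java/Spring, where Resilience4j is the usual library; the sketch below is only a minimal, language-agnostic illustration of the state machine, with thresholds and names chosen for the example.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: CLOSED -> OPEN after repeated
    failures, then HALF_OPEN after a cool-down to probe recovery."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is CLOSED

    @property
    def state(self):
        if self.opened_at is None:
            return "CLOSED"
        if time.monotonic() - self.opened_at >= self.reset_timeout:
            return "HALF_OPEN"  # allow one probe call through
        return "OPEN"

    def call(self, fn, *args, **kwargs):
        if self.state == "OPEN":
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            # Any success (CLOSED or HALF_OPEN probe) resets the breaker.
            self.failures = 0
            self.opened_at = None
            return result
```

Production implementations add per-call timeouts, sliding-window failure rates, and metrics, but the three-state core is the same.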
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
We are seeking a skilled and experienced Full Stack Engineering Manager (fully hands-on) to join our dynamic team. The ideal candidate will be a seasoned professional with a deep understanding of web, mobile, and microservices technologies, Java, and Spring Boot. As an architect, they will play a key role in designing, implementing, and maintaining scalable and resilient solutions that align with our business objectives for the commerce platform.

Responsibilities
Architect, design, and implement enterprise solutions that are scalable, resilient, and high-performing.
Define and enforce best practices for development, ensuring adherence to architectural principles and coding standards.
Possess a strong command of web, mobile, and microservices technologies.
Stay abreast of industry trends and emerging technologies related to mobile application architecture.
Collaborate with cross-functional teams, including developers, DevOps, and product owners, to ensure successful implementation of end-to-end solutions.
Provide technical leadership and mentorship to development teams.
Identify and address performance bottlenecks in the architecture, optimizing for speed and efficiency.
Implement and enforce security measures within the architecture to ensure the integrity and confidentiality of data.
Ensure compliance with industry regulations and standards.
Implement monitoring and logging solutions to proactively identify and address issues within the environment.
Troubleshoot and resolve complex issues.
Create and maintain comprehensive documentation for the mobile and microservices architecture, including design documents, technical specifications, and operational guidelines.

Qualifications
Experience with mobile design and implementation using React Native, React JS, and front-end technologies.
Proven experience as a Mobile Architect with a strong background in mobile technology.
In-depth understanding of mobile and microservices design principles, patterns, and best practices.
Solid knowledge of containerization and orchestration technologies.
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
Posted 4 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Summary
Manager – Sr. DevSecOps, Product & Engineering (PxE)

As a Sr. DevSecOps Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive DevSecOps engineering craftsmanship and advanced proficiency across multiple programming languages, DevSecOps tools, and modern frameworks, consistently demonstrating your exemplary track record in delivering high-quality, outcome-focused CI/CD and automation solutions. The ideal candidate will be a role model and an engineering mentor, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions.

Key Responsibilities:
Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop DevSecOps engineering solutions that solve complex automation problems with valuable outcomes, ensuring high-quality, lean, resilient, and secure pipelines with low operating costs, meeting platform/technology KPIs.
Technical Leadership and Advocacy: Serve as the technical advocate for modern DevSecOps practices, ensuring integrity, feasibility, and alignment with business and customer goals, NFRs, and applicable automation/integration/security practices — being responsible for designing and maintaining code repos, CI/CD pipelines, integrations (code quality, QE automation, security, etc.), and environments (sandboxes, dev, test, stage, production) through IaC, for both custom and package solutions, including identifying, assessing, and remediating vulnerabilities.
Engineering Craftsmanship: Possess passion and experience as an individual contributor, responsible for the integrity and design of DevSecOps pipelines, environments, and the technical resilience of implementations, while driving deployment techniques like Blue-Green and Canary to minimize downtime and enable A/B testing approaches. Always be hands-on and actively engage with engineers to ensure DevSecOps practices are understood and can be implemented, working with them closely during sprints and helping resolve any technical issues through to production operations (e.g., leading triage and troubleshooting of production issues). Be self-driven to learn new technologies, experiment with engineers, and inspire the team to learn and apply those new technologies.
Customer-Centric Engineering: Develop lean, yet scalable and flexible, DevSecOps automations through rapid, inexpensive experimentation to solve customer needs, enabling version control, security, logging, feedback loops, continuous delivery, etc. Engage with customers and product teams to deliver the right solution for the product in the right way at the right time.
Incremental and Iterative Delivery: Exhibit a mindset that favors action and evidence over extensive planning. Utilize a leaning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions.
Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, engineering, delivery, infrastructure, and security. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Foster a collaborative environment that enhances team synergy and innovation.
Advanced Technical Proficiency: Possess deep expertise in modern software engineering practices and principles, including Agile methodologies, DevSecOps, and Continuous Integration/Continuous Deployment. Act as a role model, leveraging these techniques to optimize solutioning and product delivery, ensuring high-quality outcomes with minimal waste. Demonstrate proficiency in the full lifecycle of product development, from conceptualization and design to implementation and scaling, with a focus on continuous improvement and learning.
Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs into technical requirements and automations. Navigate various enterprise functions, such as business and enabling areas as well as product, experience, engineering, delivery, infrastructure, and security, to drive product value and feasibility as well as alignment with organizational goals.
Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating complex technical concepts clearly and compellingly. Inspire and influence stakeholders at all levels through well-structured arguments and trade-offs supported by evidence, evaluations, and research. Create coherent narratives that align technical solutions with business objectives.
Engagement and Collaborative Co-Creation: Engage and collaborate with stakeholders at all organizational levels, from team members to senior executives. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Align diverse perspectives and drive consensus to create feasible solutions.

Qualification Required: Education and Experience
A bachelor's degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor.
Excellent software engineering foundation with a deep understanding of OOP/OOD, functional programming, data structures and algorithms, software design patterns, code instrumentation, etc.
8+ years of proven experience with Python, Bash, PowerShell, JavaScript, C#, and Golang (preferred).
8+ years of proven experience with CI/CD tools (Azure DevOps and GitHub Enterprise) and Git (version control, branching, merging, handling pull requests) to automate build, test, and deployment processes.
5+ years of hands-on experience in security tools automation: SAST/DAST (SonarQube, Fortify, Mend), monitoring/logging (Prometheus, Grafana, Dynatrace), and other cloud-native tools on AWS, Azure, and GCP.
5+ years of hands-on experience using Infrastructure as Code (IaC) technologies like Terraform, Puppet, Kubernetes (K8s), Azure Resource Manager (ARM), AWS CloudFormation, and Google Cloud Deployment Manager.
3+ years of hands-on experience with cloud-native services like data lakes, CDNs, API gateways, managed PaaS, security, etc., on multiple cloud providers (AWS, Azure, and GCP) is preferred.
3+ years of experience with AI/ML and GenAI is preferred.
Deep understanding of methodologies like XP, Lean, and SAFe to deliver high-quality products rapidly.
General understanding of cloud providers' security practices and of database technologies and maintenance (e.g., RDS, DynamoDB, Redshift, Aurora, Azure SQL, Google Cloud SQL).
General knowledge of networking, firewalls, and load balancers.
Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care.

Location: Hyderabad
Shift timing: 11 AM to 8 PM

How You Will Grow
At Deloitte, we have invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills.
And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in our Hyderabad offices, is an extension of Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte's culture
Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship
Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 302607
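One concrete shape the "identifying, assessing, and remediating vulnerabilities" responsibility in this posting often takes is a pipeline gate that blocks a deployment when scanner findings reach a severity threshold. The sketch below is a hypothetical gate over an illustrative findings schema; real SAST/DAST tools (SonarQube, Fortify, Mend) each export their own report formats, so the field names here are assumptions.

```python
# Illustrative CI/CD security gate: fail the pipeline when a scanner
# report contains findings at or above a severity threshold.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, fail_at="high"):
    """Return (passed, blocking): blocking lists the findings whose
    severity meets or exceeds the fail_at level."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [
        f for f in findings
        if SEVERITY_RANK.get(f.get("severity", "low"), 0) >= threshold
    ]
    return (len(blocking) == 0, blocking)
```

In a real pipeline this function would run as a post-scan step, print the blocking findings, and exit non-zero so the CI tool (Azure DevOps, GitHub Actions) marks the stage failed.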
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
Software Engineer – Privacy and AI Governance

Role purpose
As a Senior Software Engineer (Privacy and AI Governance), you will help design, develop, and operationalize the technical foundations of Accelya's Privacy by Design and AI governance programs. You will report directly to Accelya's Data Protection Officer/AI Governance Lead, supporting the DPO in enforcing data protection, AI accountability, and privacy-by-design principles. The role also has a dotted line to Accelya's Chief Technology Officers to ensure alignment with technical standards, priorities, and DevOps workflows. You will develop effective relationships within the technology division, working closely with technology leadership and embedding controls effectively without slowing innovation. You will spearhead efforts to ensure that privacy is embedded into our software products' DNA and that Accelya's AI initiatives meet the requirements of applicable regulations (e.g., GDPR, the EU AI Act), AI best practice (such as the NIST AI RMF), Accelya trust principles, and customer expectations.

Duties & Responsibilities
Use automated data discovery tools to identify personal or sensitive data flows.
Design and develop internal tools, APIs, and automation scripts to support data privacy workflows (e.g., DSARs, data lineage, consent management) and AI/ML governance frameworks (e.g., model cards, audit logging, explainability checks).
Review technical controls for data minimization, purpose limitation, access control, and retention policies.
Build integrations with privacy and cloud compliance platforms (e.g., OneTrust, AWS Macie, and SageMaker governance tools).
Collaborate with the AI/ML teams to establish responsible AI development patterns, including bias detection, transparency, and model lifecycle governance.
Contribute to privacy impact assessments (PIAs) and AI risk assessments by providing technical insights.
Create dashboards and monitoring systems to flag potential policy or governance violations in pipelines.
Support the DPO with the technical implementation of GDPR, CCPA, and other data protection regulations.
Collaborate with legal, privacy, and engineering teams to prioritize risks and translate findings into clear, actionable remediation plans.

Knowledge, Experience & Skills
Must-Haves:
Proven software engineering experience, ideally in backend or systems engineering roles.
Strong programming skills (e.g., Python, Java, or TypeScript).
Familiarity with data privacy and protection concepts (e.g., pseudonymization, access logging, encryption).
Understanding of the AI/ML lifecycle.
Experience working with cloud environments (especially AWS).
Ability to translate legal/policy requirements into technical designs.
Nice-to-Haves:
Experience with privacy or GRC tools (e.g., OneTrust or BigID).
Knowledge of machine learning fairness, explainability, and AI risk frameworks.
Exposure to data governance frameworks (e.g., NIST AI RMF, ISO/IEC 42001).
Prior work with privacy-enhancing technologies (PETs), e.g., differential privacy or federated learning.

What do we offer?
Open culture and challenging opportunities to satisfy intellectual needs.
Flexible working hours.
Smart working: hybrid remote/office working environment.
Work-life balance.
Excellent, dynamic, and multicultural environment.

About Accelya
Accelya is a leading global software provider to the airline industry, powering 200+ airlines with an open, modular software platform that enables innovative airlines to drive growth, delight their customers, and take control of their retailing. Owned by Vista Equity Partners' long-term perennial fund and with 2K+ employees based across 10 global offices, Accelya are trusted by industry leaders to deliver now and deliver for the future. The company's passenger, cargo, and industry platforms support airline retailing from offer to settlement, both above and below the wing.
Accelya are proud to deliver leading-edge technologies to our customers, including through our partnership with AWS and through the pioneering NDC expertise of our Global Product teams. We are proud to enable innovation-led growth for the airline industry and put control back in the hands of airlines. For more information, please visit www.accelya.com.
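Pseudonymization, one of the must-have privacy concepts in the posting above, can be sketched as keyed hashing: the same input and key always map to the same token, so joins still work across datasets, but the mapping cannot be reversed without the key. The field names below are illustrative, not Accelya's actual schema, and in production the key would live in a secrets manager.

```python
import hashlib
import hmac

def pseudonymize(value: str, key: bytes) -> str:
    """Deterministic keyed pseudonym via HMAC-SHA256."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical PII fields for the example; a real deployment would
# drive this from a data catalog or discovery tool.
PII_FIELDS = {"email", "passenger_name"}

def pseudonymize_record(record: dict, key: bytes) -> dict:
    """Replace PII fields with pseudonyms; leave other fields intact."""
    return {k: pseudonymize(v, key) if k in PII_FIELDS else v
            for k, v in record.items()}
```

Note that deterministic pseudonyms are still personal data under GDPR (they remain linkable); this reduces exposure but is not anonymization.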
Posted 4 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description

RESPONSIBILITIES
Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector DBs, embedding and reranking models, governance and observability systems, and guardrails).
Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools (strong preference for Terraform).
Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
Support vector database, feature store, and embedding store deployments (e.g., pgvector, Pinecone, Redis, Featureform, MongoDB Atlas, etc.).
Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings).
Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
Implement security best practices, including secrets management, model access control, data encryption, and audit logging for AI pipelines.
Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must Haves:
4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
Proficiency in scripting languages like Python and Bash (Go or similar is a nice plus).
Experience with monitoring, logging, and alerting systems for AI/ML workloads.
Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes:
Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex AI Pipelines.
Familiarity with prompt engineering, model fine-tuning, and inference serving.
Experience with secure AI deployment and compliance frameworks.
Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities:
Ability to work with a high level of initiative, accuracy, and attention to detail.
Ability to prioritize multiple assignments effectively.
Ability to meet established deadlines.
Ability to interact successfully, efficiently, and professionally with staff and customers.
Excellent organization skills.
Critical thinking ability ranging from moderately to highly complex.
Flexibility in meeting the business needs of the customer and the company.
Ability to work creatively and independently with latitude and minimal supervision.
Ability to utilize experience and judgment in accomplishing assigned goals.
Experience in navigating organizational structure.
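Drift detection, mentioned in the posting above, is commonly done by comparing a model's training-time feature distribution against live traffic. One standard metric is the Population Stability Index (PSI); the sketch below computes it over pre-binned counts. The 0.25 threshold is the conventional rule of thumb, and both it and the bin choices are assumptions for illustration.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (lists of counts over the same bins). Rule of thumb:
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant."""
    e_total, a_total = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, eps)  # clamp to avoid log(0)
        a_pct = max(a / a_total, eps)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

def drifted(expected, actual, threshold=0.25):
    """True when the PSI exceeds the alerting threshold."""
    return psi(expected, actual) > threshold
```

In an MLOps pipeline this check would run on a schedule against recent inference logs and, on a breach, page the team or trigger a retraining/rollback workflow.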
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it's a place where you can grow, belong, and thrive.

Your day at NTT DATA
The Cloud Managed Services Engineer (L3) is a seasoned engineering role, responsible for providing a managed service to clients by proactively identifying and resolving cloud-based incidents and problems. Through pre-emptive service incident and resolution activities, as well as product reviews, operational improvements, operational practices, and quality assurance, this role maintains a high level of service to clients. The primary objective of this role is to ensure zero missed service level agreement (SLA) conditions. The role is responsible for managing tickets of high complexity, conducting advanced and complicated tasks, and providing resolutions to a diverse range of complex problems. This position uses considerable judgment and independent analysis within defined policies and practices, and applies analytical thinking and deep technical expertise in achieving client outcomes, while coaching and mentoring junior team members across functions. The Cloud Managed Services Engineer (L3) may also contribute to or support project work as and when required.

What You'll Be Doing
Key Responsibilities:
Ensures that assigned infrastructure at the client site is configured, installed, tested, and operational.
Performs necessary checks, applies monitoring tools, and responds to alerts.
Identifies problems and errors before or as they occur, and logs all such incidents in a timely manner with the required level of detail.
Assists in analysing, assigning, and escalating support calls.
Investigates assigned third-line support calls and identifies the root cause of incidents and problems.
Reports and escalates issues to third-party vendors if necessary.
Provides onsite technical support and field engineering services to clients.
Conducts a monthly random review of incidents and service requests, analyses them, and recommends quality improvements.
Provides continuous feedback to clients and affected parties, and updates all systems and/or portals as prescribed by the company.
Proactively identifies opportunities for work optimization, including opportunities for automation of work.
May manage and implement projects within the technology domain, delivering effectively and promptly per client-agreed requirements and timelines.
May work on implementing and delivering disaster recovery functions and tests.
Performs any other related task as required.

Knowledge and Attributes:
Ability to communicate and work across different cultures and social groups.
Ability to plan activities and projects well in advance, taking into account possible changing circumstances.
Ability to maintain a positive outlook at work.
Ability to work well in a pressurized environment.
Ability to work hard and put in longer hours when necessary.
Ability to apply active listening techniques such as paraphrasing the message to confirm understanding, probing for further relevant information, and refraining from interrupting.
Ability to adapt to changing circumstances.
Ability to place clients at the forefront of all interactions, understanding their requirements and creating a positive client experience throughout the total client journey.

Academic Qualifications and Certifications:
Bachelor's degree or equivalent qualification in Information Technology/Computing (or demonstrated equivalent work experience).
Certifications relevant to the services provided (certifications carry additional weight in a candidate's qualification for the role).
Relevant certifications include (but are not limited to):
VMware Certified Professional: Data Center Virtualization.
VMware Certified Specialist – Cloud Provider.
VMware Site Recovery Manager: Install, Configure, Manage.
Microsoft Certified: Azure Architect Expert.
AWS Certified Solutions Architect – Associate.
Veeam Certified Engineer (VMCE).
Rubrik Certified Systems Administrator.
Zerto, Pure, VxRail.
Google Cloud Platform (GCP).
Oracle Cloud Infrastructure (OCI).

Required Experience:
Seasoned work experience.
Seasoned experience in an engineering function within a medium to large ICT organization.
Seasoned experience of managed services.
Excellent working knowledge of ITIL processes.
Seasoned experience working with vendors and/or third parties.
Seasoned experience managing platforms, including a combination of the following: Windows Server administration, Linux server administration, virtualization administration, server hardware and storage administration.
Extensive experience with VMware; proficient with VMware 6.0, 6.5, 6.7, and 7.
Extensive experience with VMware SRM, vROps, VMware Tanzu, and vCloud Director.
ESXi management and troubleshooting.
Performance and capacity management on VMware servers and complex health checks.
Works with engineers and project leads to implement new systems, policies, standards, and practices.
Experience in managing enterprise-level virtualization infrastructure using VMware product suites.
Design, plan, implement, and administer Cisco UCS servers and storage technologies in support of the underlying virtualization infrastructure.
Installing, configuring, and integrating servers associated with a VMware virtual infrastructure.
Hands-on experience with HPE, Dell, and Cisco UCS hardware.
Good understanding of storage infrastructure, creating technical documents, and providing RCAs for critical issues.
Storage L2 expertise on Pure, EMC, NetApp, Dell, HP, etc. (SAN/NAS).
Documents server administration processes and procedures.
Good hands-on experience with PCA (Private Cloud Appliance), ZFS Storage, and Linux.
Extensive experience and in-depth knowledge of NFS, iSCSI, and FC technologies.
Experience in builds and troubleshooting at the PCA/OVMM end.
Exceptional communication skills.
ESXi command-line troubleshooting.
VMware certification preferred.
Troubleshooting ESX issues related to storage, network, and performance.
Installing patches and updates to operating systems and applications to ensure optimal performance.
Creating disaster recovery plans and procedures for responding to major outages or natural disasters affecting the data centre.
Monitoring server performance to identify problems and making adjustments to improve performance.
Monitoring and tuning systems to achieve optimum performance levels.
Upgrading and maintaining VMware ESX and virtual servers.
Working on major-severity and complex server issues.
Performing DR activities using VMware SRM and Zerto.
Migrating customer VMware infrastructure to a new data centre using Zerto/VMware SRM.
Driving RCA preparation and meeting the SLA for RCA submission, along with planning and implementing fixes/patches.
Proficiency in controlling administrator rights on VM infrastructure, and in the use of VMware Converter Enterprise.
Ability to set up resource pools and a VMware Distributed Resource Scheduler (DRS) cluster, resource management and monitoring, along with VMware High Availability (HA) clusters.
Proven experience in ESX Server installation, configuration of virtual switches, network connections, and port groups, and configuring storage.
Expertise in Virtual Center installation and configuration, including installation of its components.
Performing Level 3 support for the ESX operating system.
Demonstrable experience in installing/upgrading the ESX platform environment.
Validated understanding of Level 3 OS monitoring support: monitoring client software, backup and recovery client, automated security, health-checking client, and OS monitoring.
Should have worked on supporting an enterprise-class data center.
Hands-on experience with physical server installation.

The job responsibility also includes:
Provide general and routine technical support for a broad range of installation, patching, configuration, and update tasks on virtual infrastructure, requiring the ability to research, analyse, and resolve problems effectively to meet established performance metrics.
Maintain documented procedures (e.g., disaster recovery) for OS infrastructure.
Collaborate and contribute with team members, software vendors, and other technical staff to develop, design, implement, and continuously improve systems.
Actively coordinate with team members and other groups to effectively perform basic Windows Server administration support for daily operations.
Follow procedures and guidelines to install, patch, configure, customize, troubleshoot, upgrade, integrate, and maintain systems, software, network and port configuration, host-based firewalls, and peripherals.
Expert knowledge of virtualization: VMware, Microsoft System Center / Hyper-V administration.
Knowledge of TCP/IP, DHCP, DNS, and troubleshooting.
Backup technologies L2 (Rubrik, Veeam, NetBackup).
Storage L2 expertise on Pure, EMC, NetApp, Dell, HP, etc. (SAN/NAS).
Familiarity with development, tools, languages, processes, methods, and troubleshooting of Microsoft solutions.
ITIL process knowledge (Problem, Incident, Change Management).

Preferred skill sets from one of the below technologies:
Good hands-on experience with PCA (Private Cloud Appliance), ZFS Storage, and Linux.
Extensive experience and in-depth knowledge of NFS, iSCSI, and FC technologies.
Troubleshooting performance and connectivity issues with PCA and storage.
Clone, snapshot, and template creation on OVMM.
Experience in PCA/OVMM management.
Experience in builds and troubleshooting at the PCA/OVMM end.
Experience in DR and restore.
Experience in troubleshooting performance issues.
Experience in managing/zoning on Brocade and Cisco FC switches. Hands-on experience in firmware/OS upgrades of PCA/OVMM, OVS nodes, and ZFS appliances. Good to have knowledge of HPE and Dell. Call logging with the vendor (Oracle) for hardware/software issues. Capacity planning for new builds, DR, and cutover. Workplace type: On-site Working About NTT DATA NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo. Equal Opportunity Employer NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
Posted 4 days ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Title: Senior Data Engineer – Data Quality, Ingestion & API Development
Mandatory skill set: Python, PySpark, AWS, Glue, Lambda, CI/CD
Total experience: 8+ years
Relevant experience: 8+ years
Work location: Trivandrum / Kochi
Candidates from Kerala and Tamil Nadu who are ready to relocate to the above locations are preferred. Candidates must have prior experience in a lead Data Engineer role.
Job Overview
We are seeking an experienced Senior Data Engineer to lead the development of a scalable data ingestion framework while ensuring high data quality and validation. The successful candidate will also be responsible for designing and implementing robust APIs for seamless data integration. This role is ideal for someone with deep expertise in building and managing big data pipelines using modern AWS-based technologies, and who is passionate about driving quality and efficiency in data processing systems.
Key Responsibilities
• Data Ingestion Framework:
o Design & Development: Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data from diverse sources.
o Framework Optimization: Use AWS services such as AWS Glue, Lambda, EMR, ECS, EC2, and Step Functions to build highly scalable, resilient, and automated data pipelines.
• Data Quality & Validation:
o Validation Processes: Develop and implement automated data quality checks, validation routines, and error-handling mechanisms to ensure the accuracy and integrity of incoming data.
o Monitoring & Reporting: Establish comprehensive monitoring, logging, and alerting systems to proactively identify and resolve data quality issues.
• API Development:
o Design & Implementation: Architect and develop secure, high-performance APIs to enable seamless integration of data services with external applications and internal systems.
o Documentation & Best Practices: Create thorough API documentation and establish standards for API security, versioning, and performance optimization.
• Collaboration & Agile Practices:
o Cross-Functional Communication: Work closely with business stakeholders, data scientists, and operations teams to understand requirements and translate them into technical solutions.
o Agile Development: Participate in sprint planning, code reviews, and agile ceremonies, while contributing to continuous improvement initiatives and CI/CD pipeline development (using tools like GitLab).
Required Qualifications
• Experience & Technical Skills:
o Professional Background: At least 5 years of relevant experience in data engineering with a strong emphasis on analytical platform development.
o Programming Skills: Proficiency in Python and/or PySpark and SQL for developing ETL processes and handling large-scale data manipulation.
o AWS Expertise: Extensive experience using AWS services including AWS Glue, Lambda, Step Functions, and S3 to build and manage data ingestion frameworks.
o Data Platforms: Familiarity with big data systems (e.g., AWS EMR, Apache Spark, Apache Iceberg) and databases like DynamoDB, Aurora, Postgres, or Redshift.
o API Development: Proven experience in designing and implementing RESTful APIs and integrating them with external and internal systems.
o CI/CD & Agile: Hands-on experience with CI/CD pipelines (preferably with GitLab) and Agile development methodologies.
• Soft Skills:
o Strong problem-solving abilities and attention to detail.
o Excellent communication and interpersonal skills with the ability to work independently and collaboratively.
o Capacity to quickly learn and adapt to new technologies and evolving business requirements.
Preferred Qualifications
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
• Experience with additional AWS services such as Kinesis, Firehose, and SQS.
• Familiarity with data lakehouse architectures and modern data quality frameworks.
• Prior experience in a role that required proactive data quality management and API-driven integrations in complex, multi-cluster environments.
Candidates who are interested, please send your resume to: gigin.raj@greenbayit.com
Mobile: 8943011666
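As an illustration only (not part of the posting): the automated data-quality checks this listing describes typically reduce to batch-level validation routines. A minimal sketch in plain Python, where the field names, the null-ratio rule, and the threshold are hypothetical examples:

```python
# Illustrative sketch of an automated data-quality check run per batch.
# The validation rule (null-ratio per required field) and the threshold
# are hypothetical examples, not the employer's actual framework.

def validate_records(records, required_fields, null_threshold=0.05):
    """Return (passed, report) for a batch of dict records."""
    report = {"total": len(records), "errors": []}
    if not records:
        report["errors"].append("empty batch")
        return False, report
    for field in required_fields:
        nulls = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = nulls / len(records)
        if ratio > null_threshold:
            report["errors"].append(
                f"{field}: {ratio:.0%} null values exceeds threshold"
            )
    return not report["errors"], report
```

In a real pipeline a routine like this would run inside a Glue or Lambda step and feed its report into monitoring and alerting rather than a return value.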
Posted 4 days ago
0.0 - 4.0 years
0 Lacs
Sarkhej, Ahmedabad, Gujarat
On-site
Job description
Role & responsibilities
The list of brief responsibilities required for the job includes, but is not limited to, the following:
- Participate in the inquiry at the pre-sales stage and help choose the right hardware and software to match the customer's requirements
- Prepare all the engineering drawings and documentation necessary for the project
- Develop and test PLC logic and SCADA/HMI according to the client's requirements by studying the BOM, I/O list, P&ID, logic and control philosophy, process flow diagram, loop drawings, interlock list, and critical parameters
- Develop SCADA graphics with advanced facilities such as alarm configuration, instrument and process faceplates, data logging, live data trends and historical trends, batch and periodic report generation, trend templates, system configuration, recipe files, and local messages
- Conduct the F.A.T. (Factory Acceptance Test) after completion of panel manufacturing
- Prepare technical documentation such as annotations, S.O.P.s, operating manuals for the system, and loop drawings
- Participate in commissioning and the SAT (Site Acceptance Test) at the customer's site
- Give hands-on training on PLC and SCADA/HMI to clients after project completion, if needed
Preferred candidate profile
- Minimum 4-8 years of experience in PLC, HMI, and SCADA programming
- Experience on Rockwell platforms is strongly preferred
- Should be able to understand and troubleshoot panel wiring for motor starters such as DOL, star-delta, VFD, soft starters, etc.
- Should be able to understand the wiring of field instruments such as temperature, pressure, level, and flow transmitters and switches
- Knowledge of PlantPAx systems, batch programming (as per ISA-88), SIS (process functional safety, SIL2 and SIL3 systems), and Industry 4.0 solutions is highly desirable
Perks and benefits
Competitive salary; travel allowance for site visits on top of all expenses
Job Types: Full-time, Permanent
Pay: ₹50,000.00 - ₹70,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Sarkhej, Ahmedabad, Gujarat: Reliably commute or plan to relocate before starting work (Required)
Application Question(s):
- Can you handle FAT, commissioning, and site activities independently?
- What is your current notice period (days)?
- What is your current CTC (LPA)?
- What is your expected CTC (LPA)?
Experience: Rockwell Automation (Allen-Bradley) hardware and software: 4 years (Preferred)
Willingness to travel: 50% (Preferred)
Posted 5 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role: Join our dynamic team as a Senior Full Stack MERN Back-End Developer, where you'll drive the development of our core backend systems and microservices layer using server-side JavaScript. You'll be pivotal in building scalable, performant, and data-driven applications using NEST.js, Express.js, GraphQL, and Azure Cloud, while also collaborating on the React.js front-end. We're looking for a seasoned developer with a deep understanding of backend architecture, API design, and cloud infrastructure, who can lead and contribute to complex projects.
Job Description:
Experience: 5-7 years of professional software development
Key Responsibilities:
NEST.js for Server-Side JavaScript - Microservices and APIs:
- Expertise in NEST.js for server-side JavaScript API and microservices development
- Best practices in the design and build of controllers, services, and modules using NEST.js
- Strong experience using TypeScript and JavaScript for all server-side development
- Experience with patterns such as interceptors and middleware
GraphQL API Mastery:
- Design, develop, and optimize high-performance GraphQL APIs using Apollo GraphQL or WunderGraph Cosmos.
- Design efficient schemas, resolvers, and data-fetching strategies to ensure optimal performance and responsiveness.
- Implement advanced GraphQL features like subscriptions, defer/stream, and federation where applicable.
Backend Engineering Leadership:
- Design and implement robust, scalable, and secure backend services using Express.js and Node.js.
- Focus on building RESTful and GraphQL APIs that serve as the backbone of our applications.
- Optimize backend performance, scalability, and reliability.
Azure Cloud Expertise (Backend-Centric):
- Utilize Azure Cloud services, particularly Azure Functions, Azure Cosmos DB, and Azure App Service, to build and deploy backend services.
- Design and implement serverless architectures for scalable and cost-effective solutions.
- Optimize cloud resource utilization and implement best practices for security and reliability.
Data Management & Integration:
- Design and manage data models and database schemas, with a focus on NoSQL databases and potentially relational databases.
- Implement data integration strategies between various systems and services.
API Documentation & Governance:
- Create and maintain comprehensive API documentation using Swagger/OpenAPI and GraphQL schema documentation.
- Establish and enforce API design standards and best practices.
Performance Tuning & Monitoring:
- Proactively monitor backend and API performance, identify bottlenecks, and implement optimizations.
- Utilize logging, monitoring, and tracing tools to diagnose and resolve production issues.
CI/CD & DevSecOps (Backend Emphasis):
- Develop and maintain robust CI/CD pipelines for backend services, focusing on automated testing and deployment.
- Integrate DevSecOps practices to ensure secure and compliant backend deployments.
Collaboration & Architectural Contribution:
- Participate in architectural discussions, contributing to the design and evolution of scalable backend architectures.
- Collaborate with front-end developers to ensure seamless integration between front-end and back-end systems.
Front-End Collaboration:
- Work in collaboration with front-end developers to ensure proper data flow between front end and back end.
- Understand the needs of the front end and provide the proper data structures and API calls.
Must-Have Skills:
- NEST.js Expertise: Experience with NEST.js for server-side JavaScript and microservice/API development
- GraphQL Experience: Expert-level knowledge of GraphQL, including schema design, resolvers, and performance optimization.
- Backend Development Expertise: Deep proficiency in Express.js, Node.js, and backend architecture.
- Azure Cloud (Backend Focus): Extensive experience with Azure Functions, Azure Cosmos DB, and Azure App Service.
- API Design & Development: Strong understanding of RESTful and GraphQL API design principles.
- Database Expertise: Proficiency in NoSQL databases (e.g., MongoDB, Cosmos DB) and understanding of relational databases.
- TypeScript & JavaScript: Advanced coding skills in TypeScript and JavaScript.
- CI/CD & DevSecOps: Practical experience implementing and managing CI/CD pipelines and DevSecOps practices.
- Performance Optimization: Proven ability to diagnose and resolve performance issues.
- Testing: Strong understanding of unit and integration testing.
Good to Have Skills:
- Azure serverless technologies expertise
- Experience with Large Language Model (LLM) API integration
- Python programming proficiency
- Experience with relational databases (PostgreSQL, MySQL)
- Experience in creative production platforms
Who You Are:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on backend and API development experience.
- Strong understanding of computer science fundamentals.
- Excellent problem-solving and analytical skills.
- Ability to translate business requirements into technical solutions.
- Strong communication skills. Team player.
Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Permanent
Posted 5 days ago
5.0 years
0 Lacs
India
Remote
Job Description
Do you like collaborating across teams to solve complex problems? Do you enjoy solving large-scale distributed content delivery challenges? Join our Mapping SRE Team! Our team manages Akamai's mapping system, enhancing reliability, performance, observability, and change management. We define KPIs, improve monitoring, and resolve complex issues.
Partner with the best
As a Senior SRE, you'll enhance Akamai's Mapping service by improving performance, availability, and reliability. You'll define KPIs, refine monitoring, and resolve complex issues.
As a Senior Site Reliability Engineer, you will be responsible for:
- Defining, managing, and analyzing SLIs and SLOs to measure system performance and availability effectively.
- Leveraging data analysis and statistical methods to identify performance trends, detect anomalies, and drive proactive optimizations.
- Designing and implementing automation frameworks to automate tasks, reduce manual effort, and improve efficiency.
- Implementing and optimizing monitoring, alerting, and logging tools to mitigate issues proactively and recommend improvements.
- Partnering with internal teams to diagnose and resolve incidents to ensure system reliability.
- Collaborating with product engineering teams to promote and implement system design practices that build scalable, resilient systems.
Do What You Love
To be successful in this role you will:
- Have a Bachelor's degree in Computer Science, Engineering, or a related field
- Have 5+ years in SRE, DevOps, or related roles with a data-driven approach
- Have expertise in SQL and statistical analysis for performance optimization
- Be proficient in Python, Go, Bash, or similar languages
- Be knowledgeable in networking, Linux, distributed systems, and design
- Have experience with monitoring tools like Grafana and Prometheus
Build your career at Akamai
Our ability to shape digital life today relies on developing exceptional people like you. The kind that can turn impossible into possible.
We’re doing everything we can to make Akamai a great place to work. A place where you can learn, grow and have a meaningful impact. With our company moving so fast, it’s important that you’re able to build new skills, explore new roles, and try out different opportunities. There are so many different ways to build your career at Akamai, and we want to support you as much as possible. We have all kinds of development opportunities available, from programs such as GROW and Mentoring, to internal events like the APEX Expo and tools such as LinkedIn Learning, all to help you expand your knowledge and experience here.
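As an illustration only (not from the posting): defining and managing SLIs and SLOs, as this role describes, often comes down to error-budget arithmetic. A hypothetical sketch, where the 99.9% target and the event counts are invented examples:

```python
# Illustrative sketch: computing an availability SLI and the fraction of
# the error budget still unspent against an SLO target. All numbers and
# the function name are hypothetical, not Akamai's actual tooling.

def error_budget_remaining(good_events, total_events, slo_target=0.999):
    """Fraction of the error budget left for the period (can go negative)."""
    if total_events == 0:
        return 1.0  # no traffic, nothing spent
    sli = good_events / total_events        # measured availability
    allowed_failure = 1.0 - slo_target     # budgeted failure rate
    actual_failure = 1.0 - sli
    return 1.0 - actual_failure / allowed_failure
```

For example, 999,500 good requests out of 1,000,000 against a 99.9% SLO leaves half the budget, which is the kind of signal that gates risky deploys.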
Posted 5 days ago
4.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
JOB DESCRIPTION - Senior React.js Developer - Work From Office - Trivandrum
You will:
- Develop ReactJS web video streaming apps for browsers/TV and supporting metadata management platforms
- Be responsible for the technical design and development of advanced video streaming consumer applications with multiple backend integrations
- Deliver app solutions across the entire app life cycle: prototype to build to test to launch to support
- Build sleek web UI/UX interfaces with a focus on usability features
- Optimize performance and stability as well as non-functional aspects such as logging
- Keep up to date on the latest industry trends and device/OS/IDE updates
You have:
- 4+ years of experience in React.js
- Excellent command of the English language
- 3 or more years of experience in front-end web-based solutions in a project services or product engineering organization
- Hands-on development experience using JavaScript, HTML, CSS, AJAX, and JSON
- Good working knowledge of JavaScript frameworks, especially ReactJS
- Experience in React 18 (required) and a basic understanding of React 19
- Experience with application frameworks and in-depth knowledge of SOLID principles
- Experience developing highly polished consumer-facing websites with smooth interactivity and responsive behavior directly from design assets
- Published sites that are now online
- Self-motivation and the ability to manage your own time to get the job done at the high international quality levels we expect
- An engineering degree in computer science or equivalent practical experience
- A solid understanding of browser system fundamentals, application performance optimization, and backend integration
- Prior experience working within OTT apps, media, e-commerce, telecommunications, or a similar large-scale consumer-facing industry is preferred
Posted 5 days ago
0 years
0 Lacs
Karnal, Haryana, India
On-site
About the company
RENXO Technologies is a fast-growing startup building cutting-edge technology solutions across industries — from AI-powered systems to intelligent automation and immersive apps. We're passionate about innovation and creating solutions that make a real difference. As we scale up, we’re inviting motivated Software Development Interns to join our dynamic teams. This is a hands-on opportunity to contribute to live projects and core product development. You’ll work closely with experienced engineers and architects in a fast-paced, learning-rich environment. If you’re passionate about technology — whether it's writing AI models, programming microcontrollers, building mobile apps, designing intuitive web interfaces, or creating scalable, cloud-native backends — we want you on our team!
Internship Tracks
Please indicate your preferred area(s) of interest when applying:
1. AI/ML & Generative AI
- Develop ML and DL models for real-world applications.
- Work with large language models (LLMs), embeddings, vector databases, NLP, and GenAI tools.
- Build computer vision pipelines using OpenCV, YOLO, OCR, or pose estimation.
- Work on data analytics, prediction, prioritization, and optimization solutions.
- Exposure to HuggingFace, LangChain, PyTorch, or TensorFlow is a plus.
2. Embedded & Edge Development
- Work on low-level embedded systems using C, MicroPython, or CircuitPython.
- Develop firmware for sensors, controllers, and custom hardware on ESP32, Arduino, Raspberry Pi, or STM32.
- Interface with peripherals, and optimize power, memory, and timing.
- Exposure to real-time systems and protocols like I2C, SPI, UART, or CAN is a bonus.
3. Backend Development (Golang/Python)
- Design and develop scalable backend services and REST APIs.
- Work with databases (NoSQL and SQL) and cloud platforms.
- Implement CI/CD pipelines, logging, monitoring, and authentication systems.
- Collaborate closely with frontend and AI teams to deliver end-to-end features.
4. Mobile App Development (Android/iOS)
- Build mobile applications using Kotlin, SwiftUI, or Flutter.
- Translate Figma designs into high-performance mobile experiences.
- Integrate REST APIs, handle local storage, and optimize responsiveness.
- Learn best practices in app lifecycle management, performance tuning, and security.
5. Frontend & Web Development (SvelteKit)
- Build modern, responsive UIs using SvelteKit and TailwindCSS.
- Connect frontend with backend and AI services via APIs and WebSockets.
- Implement clean UX flows, animations, and state management.
- Work on admin panels, dashboards, and client-facing applications.
- Build Progressive Web Apps (PWAs) for mobile.
6. Game & 3D Development (Unity)
- Build interactive 3D views for business and consumer apps.
- Contribute to multiplayer game development: implement gameplay mechanics, animations, audio, and AI behaviors.
- Explore Unity’s physics, networking, and UI systems.
- Develop rapid prototypes or simulation tools for internal or client use.
Responsibilities
- Contribute code to live customer projects and core product features.
- Work with and learn from senior developers and architects.
- Collaborate with cross-functional teams using hybrid-agile workflows.
- Learn and implement development best practices, tools, and patterns.
- Continuously explore new technologies and contribute ideas.
Basic Qualifications
- Currently pursuing or recently completed a Bachelor’s degree in CS, IT, ECE, or a related field.
- Good problem-solving ability and a genuine curiosity for learning.
- Basic knowledge of programming languages such as Python, Go, C/C++/C#, Kotlin, Dart, Swift, or JavaScript.
- Understanding of software design fundamentals and version control (Git).
Preferred Qualifications
- Personal, academic, or open-source projects in relevant fields.
- Familiarity with GitHub, Figma, Linux, or cloud platforms such as AWS or GCP.
- Prior internship, freelance, or hackathon experience.
- Strong communication and teamwork skills.
🚀 Why Join RENXO Technologies?
Hands-on experience with real-world, cutting-edge products. Work on live customer projects and product development. Friendly, collaborative startup culture that encourages learning and ownership. Direct mentorship from senior engineers and architects. Pre-Placement Offers (PPOs) for deserving candidates. Paid internship. How to Apply Please fill out the internship application form https://forms.gle/U4C15wdUYaocZzh68 and submit it along with your resume (including links to your GitHub, portfolio, or project repositories). You may also include a short note or cover letter mentioning your areas of interest and what excites you about tech. We are an equal opportunity employer and value diversity at our company. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, or disability status.
Posted 5 days ago
0.0 - 3.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
Job description
We are looking for experienced iOS developers to join our team of talented engineers to design and build the next generation of our mobile application. This is a primary role within the Software Engineering function of the Product Development team (TagMatiks). The individual should be self-motivated, creative, and proactive, with 2-5 years of progressive experience, to work successfully in a fast-paced environment with technologies such as object-oriented design, Swift, Objective-C, Cocoa Touch, and iOS UX guidelines. As part of industry-leading and patented TagMatiks product development, the individual will work closely with fellow team members, the product owner, solution architect, project manager, and other stakeholders throughout the SDLC.
Post: Senior iOS Developer
Experience: 1-3 years
Location: India Land SEZ, Coimbatore, Tamil Nadu
Job Type: Full-time, Permanent
Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or a related IT discipline; or equivalent experience.
Essential Requirements:
- 1 to 3 years’ experience with iOS and Objective-C or Swift.
- 2+ years of object-oriented programming experience or equivalent education.
- Experience in an iterative software development environment (Agile).
- Experience with test-driven development, continuous integration, and other Agile methodologies.
- Good experience using Apple’s Xcode for software development.
- Experience with standard debugging techniques such as logging, LLDB, and/or Instruments to localize and correct software defects.
- Understanding of common design patterns, including Model-View-Controller.
- Hands-on experience using smartphones and tablets, preferably iPhone and iPad.
- Strong understanding of ARC as it relates to memory management, including the concepts of strong vs. weak references.
- Ability to design, develop, and support new and existing applications and perform unit and integration testing.
- Experience with 3rd-party SDK integrations and other device libraries.
- Experience packaging and publishing applications on the App Store.
We are hiring an experienced and passionate iOS developer to design, develop, and enhance innovative and robust iOS applications with the rest of our ambitious dream team.
Job Types: Full-time, Permanent
Pay: ₹100,000.00 - ₹600,000.00 per year
Schedule: Day shift
Work Location: In person
Posted 5 days ago
3.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Company Profile
Our client is a global IT services company that helps businesses with digital transformation, with offices in India and the United States. It provides IT collaborations and uses technology, innovation, and enterprise to have a positive impact on the world of business. With expertise in the fields of Data, IoT, AI, Cloud Infrastructure, and SAP, it helps accelerate digital transformation through its key practice areas: IT staffing on demand, and innovation and growth with a focus on cost and problem solving.
Location & work: New Delhi (On-site), WFO
Employment Type: Full Time
Profile: Platform Engineer
Preferred experience: 3-5 years
The Role:
We are looking for a highly skilled Platform Engineer to join our infrastructure and data platform team. This role will focus on the integration and support of Posit for data science workloads, managing R language environments, and leveraging Kubernetes to build scalable, reliable, and secure data science infrastructure.
Responsibilities:
- Integrate and manage the Posit suite (Workbench, Connect, Package Manager) within containerized environments.
- Design and maintain scalable R environment integration (including versioning, dependency management, and environment isolation) for reproducible data science workflows.
- Deploy and orchestrate services using Kubernetes, including Helm-based Posit deployments.
- Automate provisioning, configuration, and scaling of infrastructure using IaC tools (Terraform, Ansible).
- Collaborate with data scientists to optimize R runtimes and streamline access to compute resources.
- Implement monitoring, alerting, and logging for Posit components and Kubernetes workloads.
- Ensure platform security and compliance, including authentication (e.g., LDAP, SSO), role-based access control (RBAC), and network policies.
- Support continuous improvement of DevOps pipelines for platform services.
Must-Have Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Minimum 3+ years of experience in platform, DevOps, or infrastructure engineering.
- Hands-on experience with Posit (RStudio) products, including deployment, configuration, and user management.
- Proficiency in R integration practices in enterprise environments (e.g., dependency management, version control, reproducibility).
- Strong knowledge of Kubernetes, including Helm, pod security, and autoscaling.
- Experience with containerization tools (Docker, OCI images) and CI/CD pipelines.
- Familiarity with monitoring tools (Prometheus, Grafana) and centralized logging (ELK, Loki).
- Scripting experience in Bash, Python, or similar.
Preferred Qualifications
- Experience with cloud-native Posit deployments on AWS, GCP, or Azure.
- Familiarity with Shiny apps, RMarkdown, and their deployment through Posit Connect.
- Background in data science infrastructure, enabling reproducible workflows across R and Python.
- Exposure to JupyterHub or similar multi-user notebook environments.
- Knowledge of enterprise security controls, such as SSO, OAuth2, and network segmentation.
Application Method
Apply online on this portal or by email at careers@speedmart.co.in
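As an illustration only (not from the posting): the monitoring and alerting responsibility above typically involves rules that fire when a metric stays above a threshold for a sustained window, the basic shape of "for:"-style rules in systems like Prometheus. A hypothetical, self-contained sketch of that evaluation logic; the class name and thresholds are invented:

```python
from collections import deque

# Illustrative sketch: an alert rule that fires only when a metric has
# exceeded its threshold for N consecutive samples, suppressing brief
# spikes. Threshold and window size are hypothetical examples.

class ThresholdAlert:
    def __init__(self, threshold, for_samples):
        self.threshold = threshold
        self.window = deque(maxlen=for_samples)  # keeps last N samples

    def observe(self, value):
        """Record one sample; return True if the alert should fire."""
        self.window.append(value)
        return (len(self.window) == self.window.maxlen
                and all(v > self.threshold for v in self.window))
```

In a real deployment this logic lives in the monitoring system itself (e.g., a Prometheus alerting rule), not in application code; the sketch only shows the underlying idea.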
Posted 5 days ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Role
Very good knowledge of log analysis and monitoring tools such as Prometheus, Loki, Dynatrace, Grafana, and SolarWinds. Understanding of infrastructure environments: cloud, VMware, storage, networks, databases, etc.
What you'll be responsible for
- Design and development of security policies, standards, and procedures in accordance with organization goals.
- Proactive monitoring of alerts (network, infra, applications) and taking corrective actions.
- Incident management life cycle and service request fulfilment.
- Incident logging; accurately track and document all incidents.
- Adherence to process compliance.
- Adherence to the SLAs defined for the platform and to service uptime.
- Coordination with cross-group peers, both proactively and reactively; produce quality documentation and share it with the appropriate team members.
- Develop SOP documents.
- Ability to deep-dive into identifying the root cause of various service-impacting events and optimizing.
- Act as a first point of contact for incidents, escalations, and business-impacting technology issues.
- Ensure the maximum possible service availability and performance of the platforms.
- Continuous improvement of the process.
Qualification and other skills
- 5-8 years of experience in a NOC
- Experience in alert/incident management and a good understanding of SLAs
- Troubleshooting, problem-solving, and strong presentation skills
- Analytical and communication skills
What you'd have
- Strong knowledge of Linux, networking, and database querying
- Knowledge of asset management
- Very good knowledge of log analysis and monitoring tools such as Prometheus, Loki, Dynatrace, Grafana, and SolarWinds
- Understanding of infrastructure environments: cloud, VMware, storage, networks, databases, etc.
- Strong Linux, networking, log analysis, and database querying skills
- Must have experience with monitoring tools like Prometheus, Loki, Grafana, and Dynatrace, and with building monitoring dashboards
- Experience in alert mitigation and optimization
- Knowledge of the ITIL framework
- Hands-on experience with observability tools will be an added advantage
- Must have expertise in maintaining/updating asset management
Certifications: ITIL Foundation, AZ-900, shell scripting, Python, hardware and networking
Why join us?
- Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
- Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
- Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.
Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. https://www.tanla.com
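As an illustration only (not from the posting): SLA adherence to service uptime, which this role tracks, is at its core a downtime calculation. A hypothetical sketch, where the 99.5% target and the period length are invented examples:

```python
# Illustrative sketch: checking service uptime for a period against an
# SLA target, from a list of incident downtime durations. The target
# and period are hypothetical examples, not a real contract's terms.

def sla_met(downtime_minutes, period_minutes=30 * 24 * 60, target=0.995):
    """Return (uptime_fraction, met) for the period."""
    uptime = 1.0 - sum(downtime_minutes) / period_minutes
    return uptime, uptime >= target
```

For a 30-day month, a 99.5% target allows roughly 216 minutes of downtime; two incidents of 60 and 30 minutes would leave the SLA comfortably met.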
Posted 5 days ago
1.0 years
3 Lacs
India
On-site
Role & Responsibilities
Automate deployment pipelines (Netlify, Vercel, GitHub Actions, or equivalent)
Configure DNS, SSL/TLS, and CDN settings for custom domains
Integrate and verify payment flows (Stripe test setup on /payments)
Embed and validate third-party JS widgets (raffle widget) in the site header/footer
Wire site forms into Zapier → Google Sheets (CRM capture)
Monitor uptime and alerts; maintain basic deployment dashboards
Standardize logging and event capture (e.g., push deployment and payment events to Airtable)
Write clear deployment and troubleshooting documentation

Required Experience
1+ years in DevOps for web applications
Hands-on with Netlify, Vercel, CI/CD tooling, and serverless functions
Experience configuring DNS, SSL/TLS, and CDN rules
Familiarity with Stripe integration and Zapier workflows
Proficient in both automated and manual deployment verification
Strong problem-solving skills and clear, independent communication

Compensation & Next Steps
Contract: 6 months @ INR 25,000/month (apply only if you are comfortable with this)
Immediate start required

To Apply
Email careers@alatreeventures.com with your resume and two links to relevant DevOps projects.
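The event-capture responsibility above (pushing deployment and payment events to Airtable via a webhook) can be sketched with the standard library. The field names and event kinds below are illustrative assumptions, not a real Airtable or Zapier schema:

```python
import json
from datetime import datetime, timezone

def build_event(kind: str, status: str, detail: str) -> str:
    """Build a JSON event payload for a logging webhook.

    Field names here are illustrative; a real Airtable or Zapier
    hook defines its own expected schema.
    """
    if kind not in {"deployment", "payment"}:
        raise ValueError(f"unsupported event kind: {kind}")
    payload = {
        "kind": kind,
        "status": status,
        "detail": detail,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)
```

The resulting string would then be POSTed to the webhook URL (e.g., with `urllib.request`); the network call is omitted here to keep the sketch self-contained.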
Posted 5 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Title: Quantitative Trading Consultant – Operations & Trading Systems
Location: Mumbai (In-office)
Compensation: Up to ₹1,60,000 per month (₹10–20 LPA based on experience)
Industry: Operations / Manufacturing / Production / Trading
Type: Full-time | On-site

Role Overview
We are seeking a highly skilled and technically sound Quantitative Trading Consultant to lead the setup and execution of our mid-frequency and low-frequency trading desk. This role requires a deep understanding of trading infrastructure, execution systems, real-time data management, and risk control. You will be responsible for building the trading architecture from the ground up, collaborating with research and tech teams, and ensuring regulatory compliance in Indian financial markets.

Key Responsibilities
Infrastructure Setup: Design and implement end-to-end trading infrastructure: data servers, execution systems, broker/exchange connectivity.
Real-Time Data Handling: Build and maintain real-time market data feeds using WebSocket APIs, ensuring minimal latency and high reliability.
Strategy Development Framework: Establish frameworks and tools for backtesting, forward testing, and strategy deployment across multiple asset classes.
Execution System Development: Develop low-latency, high-reliability execution code with robust risk and error-handling mechanisms.
Risk Management: Design and implement real-time risk control systems, including position sizing, exposure monitoring, and compliance with SEBI/NSE/BSE regulations.
Monitoring & Alerting: Set up systems using Prometheus, Grafana, and the ELK stack for monitoring, logging, and proactive issue alerts.
Team Collaboration: Work closely with quant researchers, DevOps, developers, and analysts to ensure smooth desk operations.
Documentation & Compliance: Maintain detailed documentation of all infrastructure, workflows, trading protocols, and risk procedures. Ensure adherence to relevant regulatory guidelines.
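Keeping a WebSocket market-data feed reliable, as the real-time data responsibility above describes, usually hinges on disciplined reconnect behaviour. A minimal sketch of a capped exponential backoff schedule with jitter; the base, cap, and jitter values are illustrative assumptions, not any vendor's policy:

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0,
                   jitter: float = 0.1) -> list[float]:
    """Return reconnect delays in seconds for successive attempts.

    Delays grow exponentially from `base`, are capped at `cap`, and get
    a small random jitter so many clients do not reconnect in lockstep.
    All parameter values here are illustrative placeholders.
    """
    delays = []
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))
        delay += random.uniform(0, jitter * delay)
        delays.append(delay)
    return delays
```

A reconnect loop would sleep for each delay in turn and reset the schedule after a successful (re)connection.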
Required Skills & Qualifications
Expert knowledge of quantitative trading, market microstructure, and execution strategy. Strong programming skills in Python, with working knowledge of C++ or Rust for performance-critical modules. Hands-on experience with WebSocket API integration, Kafka, Redis, and PostgreSQL/TimescaleDB/MongoDB. Familiarity with CI/CD tools, GitHub/GitLab, Docker, Kubernetes, and AWS/GCP cloud environments. Sound understanding of risk management frameworks and compliance in Indian markets. Excellent problem-solving and analytical thinking abilities. Strong attention to detail, documentation, and process adherence.

Preferred Experience
Previous experience in setting up or managing a quantitative trading desk (mid-frequency or low-frequency). Hands-on exposure to Indian equities, futures, and options markets. Experience working in a high-growth, fast-paced trading or hedge fund environment.

Reporting Structure
This role reports directly to senior management and works cross-functionally with technology, trading, and risk management teams.

Why Join Us
Opportunity to build and lead the trading infrastructure from the ground up. Work in a high-growth company with a strong focus on innovation and technology. Collaborate with top talent across trading, development, and research. Gain exposure to cutting-edge trading tools and modern cloud-native infrastructure.

Skills: quantitative trading, attention to detail, Python, C++, risk management, Redis, problem-solving, Rust, execution strategy, GitLab, Kafka, Docker, GitHub, analytical thinking, market microstructure, MongoDB, PostgreSQL, CI/CD, AWS, Kubernetes, regulatory compliance, WebSocket API, GCP, TimescaleDB, monitoring, API, alerting
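The risk-control duties this posting lists (position sizing, exposure monitoring) reduce to pre-trade checks in practice. A minimal sketch; the limit parameters are illustrative placeholders, not SEBI/NSE/BSE thresholds:

```python
def allowed_quantity(price: float, current_exposure: float,
                     max_exposure: float, max_order_value: float) -> int:
    """Pre-trade position-sizing check.

    Returns the largest whole quantity that keeps total exposure under
    `max_exposure` and this single order under `max_order_value`.
    The limits are illustrative, not regulatory values.
    """
    if price <= 0:
        raise ValueError("price must be positive")
    headroom = max(0.0, max_exposure - current_exposure)  # remaining exposure budget
    budget = min(headroom, max_order_value)               # per-order cap also applies
    return int(budget // price)
```

A real risk engine would layer further checks (per-symbol limits, margin, fat-finger price bands) on the same pattern.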
Posted 5 days ago
2.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come.

Job Overview And Responsibilities
This position will be based in Pune, India. As the GCP/Azure Cloud Engineer, you will be responsible for designing, implementing, and optimizing scalable, resilient cloud infrastructure on the Google Cloud and Azure platforms. This role involves deploying, automating, and maintaining cloud-based applications, services, and tools to ensure high availability, security, and performance. The ideal candidate will have in-depth knowledge of GCP and Azure services and architecture best practices, along with strong experience in infrastructure automation, monitoring, and troubleshooting.

We count on you for:
Design and implement secure, scalable, and highly available cloud infrastructure using GCP/Azure services, based on business and technical requirements
Develop automated deployment pipelines using Infrastructure-as-Code (IaC) tools such as Terraform, Google Cloud Deployment Manager, or Azure Bicep/ARM templates, ensuring efficient, repeatable, and consistent infrastructure deployments
Implement and manage security practices such as Identity and Access Management, network security, and encryption to ensure data protection and compliance with industry standards and regulations
Design and implement backup, disaster recovery, and failover solutions for high availability and business continuity
Create and maintain comprehensive documentation of infrastructure architecture, configuration, and troubleshooting steps, and share knowledge with team members
Close collaboration with the multi-cloud enterprise architect, DevOps solution architect, and Cloud Operations Manager to ensure a quick MVP prior to pushing into production
Keep up to date with new GCP/Azure services, features, and best practices, providing recommendations for process and architecture improvements

Education And Experience
Bachelor's degree in Information Technology, Computer Science, Business Administration, or a related field. A master's degree or relevant certifications would be a plus.
Minimum of 2-5 years of experience in a cloud engineering, cloud architecture, or infrastructure role
Proven experience with GCP/Azure services, including compute instances, object storage, managed databases, serverless functions, virtual networks, and IAM
Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform
Strong scripting skills in Python, Bash, or PowerShell for automation tasks
Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and experience integrating them with GCP/Azure
Knowledge of networking fundamentals and experience with GCP/Azure virtual networks, security groups, VPN, and routing
Proficiency in monitoring and logging tools, whether native cloud tools or third-party tools like Datadog and Splunk
Cybersecurity Expertise: Understanding of cybersecurity principles, best practices, and frameworks. Knowledge of encryption, identity management, access controls, and other security measures within cloud environments.
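The posting asks for Python scripting for automation and compliance tasks. One common such task is checking that resource definitions carry required tags before deployment; a minimal sketch, where the tag set and resource shape are assumptions for illustration, not an actual company policy:

```python
# Illustrative tagging policy; a real one would come from governance docs.
REQUIRED_TAGS = {"owner", "environment", "cost-center"}

def missing_tags(resource: dict) -> set[str]:
    """Return the required tags absent from one resource definition."""
    tags = resource.get("tags", {})
    return {t for t in REQUIRED_TAGS if t not in tags}

def validate(resources: list[dict]) -> list[str]:
    """Names of resources that fail the tagging policy."""
    return [r["name"] for r in resources if missing_tags(r)]
```

Wired into a CI/CD pipeline, such a check would fail the build before any non-compliant infrastructure is provisioned.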
Preferably with relevant certifications, such as Google Professional Cloud DevOps Engineer, Google Professional Cloud Architect, or Microsoft Certified: Azure Administrator Associate / Azure Solutions Architect Expert

Skills And Behavioral Competencies
Excellent problem-solving and troubleshooting abilities
Result orientation, influence & impact
Empowerment & accountability, with the ability to work independently
Team spirit, building relationships, collective accountability
Excellent oral and written communication skills for documenting and sharing information with technical and non-technical stakeholders

Language skills
English mandatory

What’s in it for the candidate
Be part of and contribute to a once-in-a-lifetime change journey
Join a dynamic team that is going to tackle big bets
Have fun and work at a high pace

About Us
Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally.
If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.
Posted 5 days ago
9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
📈 Experience: 9+ Years 📍 Location: Pune 📢 Notice Period: Immediate to 15 days; such candidates are highly encouraged to apply! 🔧 Primary Skills: Data Engineer, Lead, Architect, Python, SQL, Apache Airflow, Apache Spark, AWS (S3, Lambda, Glue)

Job Overview
We are seeking a highly skilled Data Architect / Data Engineering Lead with over 9 years of experience to drive the architecture and execution of large-scale, cloud-native data solutions. This role demands deep expertise in Python, SQL, Apache Spark, and Apache Airflow, and extensive hands-on experience with AWS services. You will lead a team of engineers, design robust data platforms, and ensure scalable, secure, and high-performance data pipelines in a cloud-first environment.

Key Responsibilities
Data Architecture & Strategy: Architect end-to-end data platforms on AWS using services such as S3, Redshift, Glue, EMR, Athena, Lambda, and Step Functions. Design scalable, secure, and reliable data pipelines and storage solutions. Establish data modeling standards, metadata practices, and data governance frameworks.
Leadership & Collaboration: Lead, mentor, and grow a team of data engineers, ensuring delivery of high-quality, well-documented code. Collaborate with stakeholders across engineering, analytics, and product to align data initiatives with business objectives. Champion best practices in data engineering, including reusability, scalability, and observability.
Pipeline & Platform Development: Develop and maintain scalable ETL/ELT pipelines using Apache Airflow, Apache Spark, and AWS Glue. Write high-performance data processing code using Python and SQL. Manage data workflows and orchestrate complex dependencies using Airflow and AWS Step Functions.
Monitoring, Security & Optimization: Ensure data reliability, accuracy, and security across all platforms. Implement monitoring, logging, and alerting for data pipelines using AWS-native and third-party tools. Optimize cost, performance, and scalability of data solutions on AWS.
Required Qualifications
9+ years of experience in data engineering or related fields, with at least 2 years in a lead or architect role. Proven experience with: Python and SQL for large-scale data processing; Apache Spark for batch and streaming data; Apache Airflow for workflow orchestration; AWS cloud services, including but not limited to S3, Redshift, EMR, Glue, Athena, Lambda, IAM, and CloudWatch. Strong understanding of data modeling, distributed systems, and modern data architecture patterns. Excellent leadership, communication, and stakeholder management skills.

Preferred Qualifications
Experience implementing data platforms using AWS Lakehouse architecture. Familiarity with Docker, Kubernetes, or similar container/orchestration systems. Knowledge of CI/CD and DevOps practices for data engineering. Understanding of data privacy and compliance standards (GDPR, HIPAA, etc.).
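Orchestrating "complex dependencies" with Airflow, as this role requires, boils down to executing a DAG of tasks in dependency order. A minimal standard-library sketch of that core idea (task names are made up; a real pipeline would use Airflow operators and scheduling rather than this hand-rolled ordering):

```python
from graphlib import TopologicalSorter

def run_order(deps: dict[str, set[str]]) -> list[str]:
    """Return one valid execution order for a task dependency graph.

    `deps` maps each task to the set of tasks it depends on, mirroring
    how an Airflow DAG wires upstream tasks. Raises CycleError if the
    graph is not acyclic.
    """
    return list(TopologicalSorter(deps).static_order())
```

For a classic extract → transform → load chain, `run_order({"load": {"transform"}, "transform": {"extract"}, "extract": set()})` yields extract before transform before load, which is exactly the guarantee Airflow's scheduler enforces at scale.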
Posted 5 days ago
2.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come.

Job Overview And Responsibilities
This position will be based in Pune, India. As the AWS Cloud Engineer, you will be responsible for designing, implementing, and optimizing scalable, resilient cloud infrastructure on the Google Cloud and AWS platforms. This role involves deploying, automating, and maintaining cloud-based applications, services, and tools to ensure high availability, security, and performance. The ideal candidate will have in-depth knowledge of GCP and AWS services and architecture best practices, along with strong experience in infrastructure automation, monitoring, and troubleshooting.

We count on you for:
Design and implement secure, scalable, and highly available cloud infrastructure using GCP/AWS services, based on business and technical requirements
Develop automated deployment pipelines using Infrastructure-as-Code (IaC) tools such as Terraform, AWS CloudFormation, or the AWS CDK, ensuring efficient, repeatable, and consistent infrastructure deployments
Implement and manage security practices such as Identity and Access Management, network security, and encryption to ensure data protection and compliance with industry standards and regulations
Design and implement backup, disaster recovery, and failover solutions for high availability and business continuity
Create and maintain comprehensive documentation of infrastructure architecture, configuration, and troubleshooting steps, and share knowledge with team members
Close collaboration with the multi-cloud enterprise architect, DevOps solution architect, and Cloud Operations Manager to ensure a quick MVP prior to pushing into production
Keep up to date with new GCP/AWS services, features, and best practices, providing recommendations for process and architecture improvements

Education and experience
Bachelor's degree in Information Technology, Computer Science, Business Administration, or a related field. A master's degree or relevant certifications would be a plus.
Minimum of 2-5 years of experience in a cloud engineering, cloud architecture, or infrastructure role
Proven experience with GCP/AWS services, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudFormation
Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform, AWS CloudFormation, or the AWS CDK
Strong scripting skills in Python, Bash, or PowerShell for automation tasks
Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and experience integrating them with GCP/AWS
Knowledge of networking fundamentals and experience with GCP/AWS VPCs, security groups, VPN, and routing
Proficiency in monitoring and logging tools, whether native cloud tools or third-party tools like Datadog and Splunk
Cybersecurity Expertise: Understanding of cybersecurity principles, best practices, and frameworks. Knowledge of encryption, identity management, access controls, and other security measures within cloud environments.
Preferably with relevant certifications, such as AWS Certified DevOps Engineer, AWS Certified SysOps Administrator, AWS Certified Solutions Architect, or their Google Cloud equivalents

Skills and behavioral competencies
Excellent problem-solving and troubleshooting abilities
Result orientation, influence & impact
Empowerment & accountability, with the ability to work independently
Team spirit, building relationships, collective accountability
Excellent oral and written communication skills for documenting and sharing information with technical and non-technical stakeholders

Language skills
English mandatory

What’s in it for the candidate
Be part of and contribute to a once-in-a-lifetime change journey
Join a dynamic team that is going to tackle big bets
Have fun and work at a high pace

About Us
Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally.
If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.
Posted 5 days ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Role: DevOps Admin
Skills: Microservices fundamentals, Linux, shell scripting, databases, web servers and load balancers, problem-solving and debugging, logging and monitoring
Education: BE/BTech/MCA
Location: Kolkata
Experience: 2-4 years

JD:
1. Commissioning of non-prod and prod environments
2. Installation and configuration of servers and application software
3. Setup of deployment pipelines and supporting tools
4. Training junior resources, publishing documentation, and guiding the team
5. Ensuring uninterrupted operations in prod and non-prod environments
6. Troubleshooting and resolution of issues within the given TAT
7. Version control, logging, monitoring
Posted 5 days ago
5.0 years
0 Lacs
Udaipur, Rajasthan, India
On-site
Job Summary We are looking for an experienced DevOps Lead to join our technology team and drive the design, implementation, and optimization of our DevOps processes and infrastructure. You will lead a team of engineers to ensure smooth CI/CD workflows, scalable cloud environments, and high availability for all deployed applications. This is a hands-on leadership role requiring a strong technical foundation and a collaborative mindset. Key Responsibilities Lead the DevOps team and define best practices for CI/CD pipelines, release management, and infrastructure automation. Design, implement, and maintain scalable infrastructure using tools such as Terraform, CloudFormation, or Ansible. Manage and optimize cloud services (e.g., AWS, Azure, GCP) for cost, performance, and security. Oversee monitoring, alerting, and logging systems (e.g., Prometheus, Grafana, ELK, Datadog). Implement and enforce security, compliance, and governance policies in cloud environments. Collaborate with development, QA, and product teams to ensure reliable and efficient software delivery. Lead incident response and root cause analysis for production issues. Evaluate new technologies and tools to improve system efficiency and reliability. Required Qualifications Bachelor's or master's degree in computer science, Engineering, or related field. 5+ years of experience in DevOps or SRE roles, with at least 2 years in a lead or managerial capacity. Strong experience with CI/CD tools (e.g., Jenkins, GitHub Actions, GitLab CI/CD). Expertise in infrastructure as code (IaC) and configuration management. Proficiency in scripting languages (e.g., Python, Bash). Deep knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes). Experience with version control (Git), artifact repositories, and deployment strategies (blue/green, canary). Solid understanding of networking, DNS, firewalls, and security protocols. 
Preferred Qualifications
Certifications (e.g., Azure Certified DevOps Engineer, CKA/CKAD). Experience in a regulated environment (e.g., HIPAA, PCI, SOC2). Exposure to observability platforms and chaos engineering practices. Background in agile/scrum methodologies.

Skills: Strong leadership and team-building capabilities. Excellent problem-solving and troubleshooting skills. Clear and effective communication, both written and verbal. Ability to work under pressure and adapt quickly in a fast-paced environment. (ref:hirist.tech)
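Among the deployment strategies this role lists, a canary rollout routes a fixed fraction of traffic to the new version. A minimal sketch using a stable hash of the user id, so each user consistently lands on the same side across requests; the percentage and the use of user id as the routing key are illustrative assumptions:

```python
import hashlib

def routed_to_canary(user_id: str, percent: int) -> bool:
    """Stable canary routing: send `percent`% of users to the new build.

    Hashing the user id (rather than choosing randomly per request)
    keeps each user pinned to one version for the whole rollout.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be in [0, 100]")
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100  # bucket in 0..99
    return bucket < percent
```

In a blue/green setup the same predicate could gate the load-balancer target; ramping the rollout is just raising `percent` while watching error rates.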
Posted 5 days ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Company: Our Client is a leading Indian multinational IT services and consulting firm. It provides digital transformation, cloud computing, data analytics, enterprise application integration, infrastructure management, and application development services. The company caters to over 700 clients across various industries, including banking and financial services, manufacturing, technology, media, retail, and travel and hospitality. Its industry-specific solutions are designed to address complex business challenges by combining domain expertise with deep technical capabilities, backed by a global workforce of over 80,000 professionals and a presence in more than 50 countries.

Job Title: Python Developer
Locations: PAN INDIA
Experience: 5-10 Years (Relevant)
Employment Type: Contract to Hire
Work Mode: Work From Office
Notice Period: Immediate to 15 Days

JOB DESCRIPTION:
Cloud Computing: Proficiency in cloud platforms such as AWS, Google Cloud, or Azure
Containerization: Experience with Docker and Kubernetes for container orchestration
CI/CD: Strong knowledge of continuous integration and continuous delivery processes using tools like Jenkins, GitLab CI, or Azure DevOps
Infrastructure as Code (IaC): Experience with IaC tools such as Terraform or CloudFormation
Scripting and Programming: Proficiency in scripting languages (e.g., Python, Bash) and programming languages (e.g., Java, Go)
Monitoring and Logging: Familiarity with monitoring tools (e.g., Prometheus, Grafana) and logging tools (e.g., the ELK stack)
Security: Knowledge of security best practices and tools for securing platforms and data
Networking: Understanding of networking concepts and technologies
Database Management: Experience with both SQL and NoSQL databases
Automation: Proficiency in automation tools and frameworks
Version Control: Strong knowledge of version control systems like Git
Development Understanding: Solid understanding of the software development life cycle (SDLC) and experience working closely with development teams

Mandatory Skills: Azure API Management, Azure Blob Storage, Azure Cloud Architecture, Azure Container Apps, Azure Cosmos DB, Azure DevOps, Azure Event Grid, Azure Functions, Azure IoT, Docker, Kubernetes
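For the monitoring-and-logging requirement above, shipping application logs into an ELK-style stack is much easier when each record is emitted as a single JSON object. A minimal standard-library Formatter sketch; the field names follow a common convention and are not a required schema:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),       # human-readable timestamp
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),      # msg with args applied
        })
```

Attached to a `logging.StreamHandler`, this makes every line directly ingestible by Logstash or a similar shipper without grok patterns.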
Posted 5 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Required Information / Details
1. Role: Linux TSR (Technical Support Resident Engineer)
2. Required Technical Skill Set: L2 Linux, L1-L2 DevOps, CloudOps
4. Desired Experience Range: 3 to 5 years
5. Location of Requirement: Chennai, Ahmedabad (Gandhi Nagar)

Desired Competencies (Technical/Behavioral Competency)
Must-Have (ideally at least 3 years of hands-on experience with Linux)
· Compute: Demonstrates a deep understanding of compute concepts, including virtualization, operating systems, system administration, performance, networking, and troubleshooting.
· Web Technologies: Experience with web technologies and protocols (HTTP/HTTPS, REST APIs, SSL/TLS) and experience troubleshooting web application issues.
· Operating Systems: Strong proficiency in Linux (e.g., RHEL, CentOS, Ubuntu) system administration. Experience with Windows Server administration is a plus.
· GCP Proficiency: Experience with GCP core services, particularly Compute Engine, Networking (VPC, Subnets, Firewalls, Load Balancing), Storage (Cloud Storage, Persistent Disk), and related services within the Compute domain.
· Security: Solid understanding of security best practices for securing compute resources in cloud environments, including IAM implementation, access control, vulnerability management, and protection against unauthorized access and data exfiltration.
· Monitoring and Logging: Experience with monitoring tools for troubleshooting and performance analysis.
· Scripting and Automation: Proficiency in scripting (e.g., Bash, Python, PowerShell) for system administration, automation, and API interaction. Experience with automation tools (e.g., Terraform, Ansible, Jenkins, Cloud Build) is essential.
· Networking: Solid understanding of networking concepts and protocols (TCP/IP, DNS, BGP, routing, load balancing) and experience troubleshooting network issues.
· Problem-Solving Skills: Excellent analytical and problem-solving skills with the ability to identify and resolve complex technical issues.
· Communication Skills: Strong communication and collaboration skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences.
Good-to-Have
● Minimum of 3+ years’ experience in implementing both on-premises and cloud-based infrastructure
● Certifications related to Linux/CloudOps, preferably Associate Cloud Engineer
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Client: Our Client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with a revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our Client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Performance Tester
Key Skills: AWS, JMeter, AppDynamics, New Relic, Splunk, DataDog
Job Locations: Chennai, Pune
Experience: 6-7 Years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate

Job Description:
Experience, Skills and Qualifications:
• Performance engineering, testing, and tuning of cloud-hosted digital platforms (e.g., AWS)
• Working knowledge (preferably with an AWS Solutions Architect certification) of cloud platforms like AWS, AWS key services, and DevOps tools like CloudFormation and Terraform
• Performance engineering and testing of web apps (Linux); performance testing and tuning of web-based applications
• Performance engineering toolsets such as JMeter, Micro Focus Performance Center, BrowserStack, Taurus, and Lighthouse
• Monitoring/logging tools (such as AppDynamics, New Relic, Splunk, DataDog)
• Windows / UNIX / Linux / web / database / network performance monitors to diagnose performance issues, along with JVM tuning and heap analysis skills
• Docker, Kubernetes, and cloud-native development and container orchestration frameworks; Kubernetes clusters, pods & nodes; vertical/horizontal pod autoscaling concepts; high availability
• Performance testing and engineering activity planning, estimating, designing, executing, and analysing output from performance tests
• Working in an agile environment, a "DevOps" team, or a similar multi-skilled team in a technically demanding function
• Jenkins and CI/CD pipelines, including pipeline scripting
• Chaos engineering using tools like Chaos Toolkit, AWS Fault Injection Simulator, Gremlin, etc.
• Programming and scripting language skills in Java, Shell, Scala, Groovy, Python, and knowledge of security mechanisms such as OAuth
• Tools like GitHub, Jira & Confluence
• Assisting with resiliency production support teams and performance incident root cause analysis
• Ability to prioritize work effectively and deliver within agreed service levels in a diverse and ever-changing environment
• High levels of judgment and decision making, being able to rationalize and present the background and reasoning for the direction taken
• Strong stakeholder management and excellent communication skills
• Extensive knowledge of risk management and mitigation
• Strong analytical and problem-solving skills
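Analysing output from performance tests, as the list above requires, usually reduces to latency percentiles (p50/p90/p95/p99), the same summary a JMeter-style report prints per transaction. A minimal standard-library sketch; the sample data in the usage note is made up:

```python
from statistics import quantiles

def latency_percentiles(samples_ms: list[float]) -> dict[str, float]:
    """p50/p90/p95/p99 of observed response times, in milliseconds.

    Uses inclusive quantiles over the samples; percentile definitions
    differ slightly between tools, so treat this as one reasonable choice.
    """
    if not samples_ms:
        raise ValueError("no samples")
    cuts = quantiles(samples_ms, n=100, method="inclusive")  # 99 cut points
    return {"p50": cuts[49], "p90": cuts[89], "p95": cuts[94], "p99": cuts[98]}
```

Comparing p95/p99 between runs (rather than averages) is what surfaces tail-latency regressions hidden by a healthy mean.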
Posted 5 days ago
The logging job market in India is vibrant and offers a wide range of opportunities for job seekers interested in this field. Logging professionals are in demand across various industries such as IT, construction, forestry, and environmental management. If you are considering a career in logging, this article will provide you with valuable insights into the job market, salary range, career progression, related skills, and common interview questions.
Major Indian metros and industrial hubs are known for their thriving industries, where logging professionals are actively recruited.
The average salary range for logging professionals in India varies based on experience and expertise. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.
A typical career path in logging may include roles such as Logging Engineer, Logging Supervisor, Logging Manager, and Logging Director. Professionals may progress from entry-level positions to more senior roles such as Lead Logging Engineer or Logging Consultant.
In addition to logging expertise, employers often look for professionals with skills such as data analysis, problem-solving, project management, and communication skills. Knowledge of industry-specific software and tools may also be beneficial.
As you embark on your journey to explore logging jobs in India, remember to prepare thoroughly for interviews by honing your technical skills and understanding industry best practices. With the right preparation and confidence, you can land a rewarding career in logging that aligns with your professional goals. Good luck!