0 years
0 Lacs
India
Remote
Job Title: AWS Lead Engineer
Location: Remote
Employment Type: Full-time

About the Role: We are seeking an AWS DevOps Engineer to design, deploy, and optimize a real-time data streaming platform on AWS. You will work with cutting-edge cloud technologies, ensuring scalability, security, and high performance using Kubernetes, Terraform, CI/CD, and monitoring tools.

Key Responsibilities:
• Design & maintain AWS-based streaming solutions (Lambda, S3, RDS, VPC)
• Manage Kubernetes (EKS): Helm, ArgoCD, IRSA
• Implement Infrastructure as Code (Terraform)
• Automate CI/CD pipelines (GitHub Actions)
• Monitor & troubleshoot using Datadog/Splunk
• Ensure security best practices (Snyk, SonarCloud)
• Collaborate with teams to integrate data products

Must-Have Skills:
• AWS (IAM, Lambda, S3, VPC, CloudWatch)
• Kubernetes (EKS) & Helm/ArgoCD
• Terraform (IaC)
• CI/CD (GitHub Actions)
• Datadog/Splunk monitoring
• Docker & Python/Go scripting

Nice-to-Have:
• AWS certifications (DevOps/Solutions Architect)
• Splunk/SDLC experience

Why Join Us?
• Work with modern cloud & DevOps tools
• Collaborative & innovative team
• Growth opportunities in AWS & DevOps
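The posting above centers on Lambda-based stream processing. As a minimal sketch of what such work can look like, here is a hypothetical Lambda-style handler that decodes a Kinesis-shaped batch and summarizes event types; the event shape (base64-encoded JSON under `Records[i]["kinesis"]["data"]`) is an assumption for illustration, not something taken from the posting:

```python
import base64
import json

def handler(event, context=None):
    """Decode a Kinesis-style batch of records and count events by type.

    Assumption: each record carries a base64-encoded JSON payload under
    event["Records"][i]["kinesis"]["data"]; a real deployment would match
    whatever trigger the pipeline actually uses.
    """
    counts = {}
    for record in event.get("Records", []):
        raw = base64.b64decode(record["kinesis"]["data"])
        payload = json.loads(raw)
        etype = payload.get("type", "unknown")
        counts[etype] = counts.get(etype, 0) + 1
    return {"processed": sum(counts.values()), "by_type": counts}
```

Because the handler is a plain function, it can be exercised locally with a hand-built event before wiring it to any AWS trigger.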
Posted 1 day ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Role Proficiency: Acts under the guidance of a Lead II/Architect; understands customer requirements and translates them into designs for new DevOps (CI/CD) components. Capable of managing at least one Agile team.

Outcomes:
• Interprets the DevOps tool/feature/component design and develops/supports it in accordance with specifications
• Adapts existing DevOps solutions and creates own DevOps solutions for new contexts
• Codes, debugs, tests, documents, and communicates DevOps development stages and the status of develop/support issues
• Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
• Optimises efficiency, cost, and quality of DevOps process, tools, and technology development
• Validates results with user representatives; integrates and commissions the overall solution
• Helps Engineers troubleshoot issues that are novel/complex and not covered by SOPs
• Designs, installs, configures, and troubleshoots CI/CD pipelines and software
• Automates infrastructure provisioning on cloud/on-premises with the guidance of architects
• Provides guidance to DevOps Engineers so that they can support existing components
• Works with diverse teams using Agile methodologies
• Facilitates cost-saving measures through automation
• Mentors A1 and A2 resources
• Involved in the code review of the team

Measures of Outcomes:
• Quality of deliverables
• Error rate/completion rate at various stages of SDLC/PDLC
• Number of components reused
• Number of domain/technology/product certifications obtained
• SLA for onboarding and supporting users and tickets

Outputs Expected:
• Automated components: deliver components that automate installation/configuration of software/tools on-premises and on cloud, and that automate parts of the build/deploy for applications
• Configured components: configure a CI/CD pipeline that can be used by application development/support teams
• Scripts: develop/support scripts (e.g., PowerShell/Shell/Python) that automate installation/configuration/build/deployment tasks
• Onboard users: onboard and extend existing tools to new app dev/support teams
• Mentoring: mentor and provide guidance to peers
• Stakeholder management: guide the team in preparing status updates, keep management updated, and share status reports with senior stakeholders
• Training/SOPs: create training plans/SOPs to help DevOps Engineers with DevOps activities and with onboarding users
• Measure process efficiency/effectiveness: measure the efficiency/effectiveness of the current process and make changes to make it more efficient and effective

Skill Examples:
• Design, installation, configuration, and troubleshooting of CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
• Integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover
• Integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
• Scripting skills (Python/Linux Shell/Perl/Groovy/PowerShell)
• Infrastructure automation (Ansible/Puppet/Chef/PowerShell)
• Repository management/migration automation: Git/Bitbucket/GitHub/ClearCase
• Build automation scripts: Maven/Ant
• Artefact repository management: Nexus/Artifactory
• Dashboard management & automation: ELK/Splunk
• Configuration of cloud infrastructure (AWS/Azure/Google)
• Migration of applications from on-premises to cloud infrastructure
• Azure DevOps/ARM (Azure Resource Manager)/DSC (Desired State Configuration); strong debugging skills in C# and .NET
• Setting up and managing Jira projects and Git/Bitbucket repositories
• Containerization tools like Docker/Kubernetes

Knowledge Examples:
• Installation/config/build/deploy processes and tools
• IaaS cloud providers (AWS/Azure/Google, etc.) and their toolsets
• The application development lifecycle
• Quality assurance processes
• Quality automation processes and tools
• Multiple tool stacks, not just one
• Build branching/merging
• Containerization
• Security policies and tools
• Agile methodologies

Additional Comments:
Experience preferred: 5+ years
• Languages: must have expert knowledge of either Go or Java and some knowledge of two others among Go, Java, Python, and C; basic Golang knowledge
• Infra - Brokers: must have some experience, and preferably mastery, in at least one product. We use RabbitMQ and MQTT (Mosquitto). Experience with edge deployments of brokers is preferred, because the design perspective differs for persistence, hardware, and telemetry
• Linux shell/scripting
• Docker
• Kubernetes (k8s): edge-deployment experience preferred; must have some mastery here or in Docker
• K3s (nice-to-have)
• Tooling: GitLab CI/CD automation
• Dashboard building: in any system; someone who can take raw data and make it presentable and usable for production support
• Nice to have: Ansible, Terraform

Responsibilities:
• KTLO activities for existing RabbitMQ and MQTT instances, including annual PCI, patching and upgrades, monitoring library upgrades of applications, production support, etc.
• Project work for RabbitMQ and MQTT instances, including library enhancements in multiple languages; security enhancements (currently setting up the hardened cluster, including all of the requested security changes); and telemetry, monitoring, dashboarding, and reporting

Skills: Java, DevOps, RabbitMQ
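Several of the outputs above are scripts that automate install/deploy steps or broker operations, which in practice must tolerate transient failures. A minimal sketch, assuming a Python scripting context as the posting suggests: a generic retry-with-exponential-backoff helper of the kind such scripts commonly wrap around a broker publish or install step (all names here are illustrative, not from the posting):

```python
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn(); on exception, retry with exponential backoff.

    Re-raises the last exception once attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical flaky operation: fails twice, then succeeds.
calls = {"n": 0}

def flaky_publish():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("broker unavailable")
    return "ack"
```

In a real automation script the same wrapper would surround the actual broker or configuration call; the flaky stub simply makes the behavior observable.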
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: S&C Global Network - AI - CDP - Marketing Analytics - Analyst
Management Level: 11 - Analyst
Location: Bengaluru, BDC7C
Must-have skills: Data Analytics
Good-to-have skills: Ability to leverage design thinking, business process optimization, and stakeholder management skills.

Job Summary: This role involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions.

Roles & Responsibilities: Provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.

WHAT'S IN IT FOR YOU? As part of our Analytics practice, you will join a worldwide network of 20k+ smart and driven colleagues experienced in leading AI/ML/statistical tools, methods, and applications. From data to analytics and insights to actions, our forward-thinking consultants provide analytically informed, issue-based insights at scale to help our clients improve outcomes and achieve high performance.

What You Would Do in This Role: A Consultant/Manager for Customer Data Platforms serves as the day-to-day marketing technology point of contact and helps our clients get value out of their investment in a Customer Data Platform (CDP) by developing a strategic roadmap focused on personalized activation. You will be working with a multidisciplinary team of Solution Architects, Data Engineers, Data Scientists, and Digital Marketers.

Key Duties and Responsibilities:
• Be a platform expert in one or more leading CDP solutions: developer-level expertise on Lytics, Segment, Adobe Experience Platform, Amperity, Tealium, Treasure Data, etc., including custom-built CDPs
• Deep developer-level expertise in real-time event tracking for web analytics, e.g., Google Tag Manager, Adobe Launch
• Provide deep domain expertise in our client's business and broad knowledge of digital marketing, together with a Marketing Strategist
• Deep expert-level knowledge of GA360/GA4, Adobe Analytics, Google Ads, DV360, Campaign Manager, Facebook Ads Manager, The Trade Desk, etc.
• Assess and audit the current state of a client's marketing technology stack (MarTech), including data infrastructure, ad platforms, and data security policies, together with a Solutions Architect
• Conduct stakeholder interviews and gather business requirements
• Translate business requirements into BRDs and CDP customer analytics use cases, and structure the technical solution
• Prioritize CDP use cases together with the client
• Create a strategic CDP roadmap focused on data-driven marketing activation
• Work with the Solution Architect to strategize, architect, and document a scalable CDP implementation tailored to the client's needs
• Provide hands-on support and platform training for our clients
• Data processing, data engineering, and data schema/model expertise for CDPs: work on data models, unification logic, etc.
• Work with Business Analysts, Data Architects, Technical Architects, and DBAs to achieve project objectives (delivery dates, quality objectives, etc.)
• Business intelligence expertise for insights and actionable recommendations
• Project management expertise for sprint planning

Professional & Technical Skills:
• Relevant experience in the required domain
• Strong analytical, problem-solving, and communication skills
• Ability to work in a fast-paced, dynamic environment
• Strong understanding of data governance and compliance (i.e., PII, PHI, GDPR, CCPA)
• Experience with analytics tools like Google Analytics or Adobe Analytics is a plus
• Experience with A/B testing tools is a plus
• Must have programming experience in PySpark, Python, and shell scripts
• RDBMS, T-SQL, and NoSQL experience is a must
• Manage large volumes of structured and unstructured data; extract and clean data to make it amenable for analysis
• Experience in deploying and operationalizing code is an added advantage
• Experience with source control systems such as Git and Bitbucket, and with Jenkins build and continuous integration tools
• Proficient in Excel, MS Word, PowerPoint, etc.

Technical Skills:
• Experience with any CDP platform: Lytics, and/or Segment, and/or Adobe Experience Platform (Real-Time CDP), and/or a custom CDP on any cloud
• GA4/GA360, and/or Adobe Analytics
• Google Tag Manager, and/or Adobe Launch, and/or any tag manager tool
• Google Ads, DV360, Campaign Manager, Facebook Ads Manager, The Trade Desk, etc.
• Deep cloud experience (GCP, AWS, Azure)
• Advanced Python, SQL, and shell scripting experience
• Data migration, DevOps, MLOps, Terraform scripting

Soft Skills:
• Strong problem-solving skills
• Good team player
• Attention to detail
• Good communication skills

Additional Information:
• Opportunity to work on innovative projects
• Career growth and leadership exposure

About Our Company | Accenture
Experience: 3-5 years
Educational Qualification: Any degree
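The "unification logic" mentioned above is the heart of a CDP: records from different sources that share an identifier get stitched into one customer profile. A hedged sketch of that idea using a union-find over (record, identifier) pairs; the input shape and function name are hypothetical, and real CDPs apply much richer match rules than shared-identifier equality:

```python
def unify_identities(edges):
    """Union-find identity stitching: group records sharing any identifier.

    edges: iterable of (record_id, identifier) pairs, e.g. emails or
    device IDs. Returns a dict mapping each record_id to a canonical
    profile representative.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Tag nodes so record IDs and identifiers never collide.
    for rec, ident in edges:
        union(("rec", rec), ("id", ident))
    records = {rec for rec, _ in edges}
    return {rec: find(("rec", rec)) for rec in records}
```

Two records end up with the same representative exactly when a chain of shared identifiers connects them, which is the transitive-closure behavior profile unification needs.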
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: ETL Developer
Key Skills: DataStage (ETL), R language (must have), Linux scripting, SQL, Control-M, GCP knowledge
Job Location: Pune
Experience: 4 - 6 years
Education Qualification: Any graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 10 days
Payroll: People Prime Worldwide

Job description: The candidate should have 8 yrs and above experience in ETL development.
• Understanding of data modeling concepts
• Passionate about sophisticated data structures and problem solutions; quickly learns new data tools and ideas
• Proficient in DataStage (ETL), R language (must have), Linux scripting, SQL, and Control-M; GCP knowledge would be an added advantage
• Well aware of Agile ways of working
• Knowledge of different SQL/NoSQL data storage techniques and Big Data technologies
Posted 1 day ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description:
Position: Security Incident Responder
Exp.: 4+ years
Location: Gurgaon (5 days WFO)
Apply here: https://forms.gle/1PVR9KTHvRaeMBuj8

Snowbit is looking for an experienced Security Incident Responder to join our Managed Detection and Response (MDR) team. This role requires expertise in incident response, threat hunting, and forensic investigations, with a strong emphasis on cloud environments and Kubernetes. You will lead efforts to protect our customers from advanced cyber threats while contributing to the continuous improvement of Snowbit's methodologies, processes, and technology stack.

What You'll Do:
• Leverage Snowbit's advanced MDR platform to lead large-scale incident response investigations and proactive threat-hunting initiatives.
• Conduct log analysis and cloud artifact reviews using EDR and similar tools, depending on availability, to support incident resolution and root-cause investigations.
• Investigate and respond to security incidents in containerized environments, with a specific focus on Kubernetes security and architecture.
• Research evolving cyberattack tactics, techniques, and procedures (TTPs) to strengthen customer defenses and codify insights for our services.
• Provide technical and executive briefings to customers, including recommendations to mitigate risk and enhance cybersecurity posture.
• Collaborate with internal teams, including engineering and research, to enhance Snowbit's MDR and incident response capabilities.
• Partner with customer teams (IT, DevOps, and Security) to ensure seamless integration and adoption of Snowbit's MDR services.
• Share expertise through presentations, research publications, and participation in the global cybersecurity community.

Experience:
• 3-5 years in incident response and threat hunting, with strong experience in cloud security (AWS, Azure, GCP) and Kubernetes environments.
• Proven incident response experience in complex environments.

Technical Skills:
• Strong expertise in understanding adversary tactics and techniques, translating them into actionable investigation tasks, conducting in-depth analysis, and accurately assessing impact.
• Familiarity with attack vectors, malware families, and campaigns.
• Deep understanding of network architecture, protocols, and operating system internals (Windows, Linux, Unix).
• Expertise in Kubernetes security, including container orchestration, workload isolation, and cluster hardening.
• Experience securing Kubernetes infrastructure, runtime security, and security monitoring.

Problem-Solving: Ability to work independently and collaboratively in dynamic, fast-paced environments.
Communication: Excellent written and verbal communication skills to interact with technical and non-technical stakeholders.

Preferred Skills:
• Scripting skills (e.g., Python, PowerShell)
• Experience with Red Team operations, penetration testing, or cyber operations
• Hands-on knowledge of attack frameworks (e.g., MITRE ATT&CK, Metasploit, Cobalt Strike)
• Proficiency in host forensics, memory forensics, and malware analysis
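Log analysis of the kind described above often starts with simple detections over parsed auth events, e.g., flagging repeated failed logins from one source inside a short window. A hedged sketch (the event tuple shape, field names, and thresholds are assumptions for illustration, not Snowbit's actual detection logic):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_bruteforce(events, threshold=5, window=timedelta(minutes=2)):
    """Flag source IPs with >= threshold failed logins inside a window.

    events: iterable of (timestamp, ip, outcome) tuples, where outcome
    is "fail" or "ok". Returns the set of flagged IPs.
    """
    by_ip = defaultdict(list)
    for ts, ip, outcome in events:
        if outcome == "fail":
            by_ip[ip].append(ts)
    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        # Slide a threshold-sized group over the sorted timestamps.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                flagged.add(ip)
                break
    return flagged
```

In practice such a rule would be one of many feeding an investigation queue; the sorted sliding group keeps it correct even when events arrive out of order.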
Posted 1 day ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
Role: Spotfire Consultant
Location: Remote
Duration: Full Time

Key Responsibilities:
• Develop, automate, and optimize Spotfire dashboards for data visualization and analysis.
• Connect and integrate SQL databases with Spotfire, including writing and optimizing queries.
• Optimize data mapping for efficient queries and seamless Spotfire integration.
• Work with large datasets to ensure efficient performance in Spotfire.
• Customize dashboards using IronPython scripting and HTML/CSS/JavaScript.
• Collaborate with internal teams to translate business requirements into actionable insights.
• Troubleshoot performance issues and recommend best practices for data visualization.

Required Skills & Experience:
• Strong experience with Spotfire Analyst for data visualization and analytics.
• Proficiency in SQL (writing queries, stored procedures, and performance tuning).
• Familiarity with database management systems (e.g., Snowflake, SQL Server, Oracle, PostgreSQL, MySQL).
• Experience with HTML, JavaScript, and IronPython for dashboard customization.
• Ability to work independently and communicate effectively with stakeholders.
Posted 1 day ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title: DevOps Engineer - Tooling & Infrastructure Support
Experience: 5-7 Years
Location: Onsite, India (Work from Office): Noida / Gurgaon / Mumbai

About the Role: We are seeking an experienced DevOps Engineer to drive automation, infrastructure provisioning, and operational excellence across development and QA environments. This role requires deep expertise in CI/CD tooling, system administration, and configuration management. You will work closely with cross-functional teams to enable scalable, reliable, and secure engineering workflows, with a strong focus on tooling, observability, and deployment automation.

Key Responsibilities:

Infrastructure Automation & CI/CD
• Automate infrastructure provisioning using tools like Terraform and Ansible
• Design, implement, and maintain robust CI/CD pipelines with Jenkins and GitLab
• Drive infrastructure-as-code practices and ensure version-controlled deployment workflows

System Management & Environment Support
• Manage Linux-based systems, including scripting and server maintenance
• Support engineering and QA environments across development, staging, and production
• Perform environment health checks and manage capacity planning

Containerization & Observability
• Containerize applications using Docker and manage their lifecycle across environments
• Set up monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
• Maintain alerting systems to ensure availability, performance, and fault tolerance

Collaboration & Integration
• Work with backend engineers and QA teams to align on deployment strategies
• Support rollouts, rollback procedures, and release validation processes
• Contribute to improving operational processes, runbooks, and incident response

Continuous Improvement
• Ensure system reliability, scalability, and security best practices
• Participate in retrospectives, root cause analysis, and process automation initiatives
• Document infrastructure architecture, workflows, and troubleshooting guides

Core Skill Set:
• Strong hands-on experience with Jenkins, GitLab, Docker, Terraform, and Ansible
• Proficiency in Linux scripting and system administration
• Solid understanding of CI/CD pipeline design and deployment automation
• Experience setting up monitoring and observability tools
• Familiarity with infrastructure management at scale
• Ability to work in a US-overlap time zone and support distributed teams

Preferred Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field
• 5-7 years of experience in DevOps or SRE roles
• Experience with production-grade infrastructure in cloud or hybrid environments
• Exposure to large-scale infrastructure migrations
• Strong communication and collaboration skills across technical teams
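The rollout and release-validation duties above often reduce to an automated gate: compare canary metrics against a baseline before promoting a release, and trigger rollback otherwise. A minimal sketch of that decision step; the function name, metric shape, and tolerance are assumptions for illustration, not this team's actual policy:

```python
def canary_gate(error_rates, baseline, tolerance=0.02):
    """Decide whether a canary rollout may be promoted.

    error_rates: sampled canary error rates (fractions, e.g. 0.01 = 1%).
    Promotes only if every sample stays within `tolerance` of `baseline`;
    otherwise reports the breaching samples so a rollback can be triggered.
    """
    breaches = [r for r in error_rates if r > baseline + tolerance]
    return {"promote": not breaches, "breaches": breaches}
```

In a pipeline this would run between the canary soak and the full rollout stage, with the metric samples pulled from whatever monitoring stack is in place.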
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We're Hiring: Data Analyst - Sales & Marketing Operations

Working Days: Monday to Friday (morning shift)
Note: No cabs/travel provided
Salary: Up to ₹7.5 LPA (max. 35% hike on the last drawn CTC)

About the Role: We are seeking a highly analytical and detail-oriented Data Analyst to own the complete data management lifecycle and drive data-driven decision-making across Sales, Marketing, and Business teams. The ideal candidate will have a deep understanding of CRM systems, data hygiene practices, and marketing automation tools.

Key Responsibilities:
• Own the end-to-end data lifecycle: hygiene, cleansing, validation, enrichment, and integration.
• Collaborate with Sales, Marketing, IT, and Conference teams to translate data into actionable insights.
• Ensure CRM data integrity in tools like HubSpot, Salesforce, and Zoho.
• Define and manage SOPs for data acquisition and enrichment.
• Build reporting and dashboards using Tableau, Power BI, or Google Data Studio.
• Conduct market research and maintain high-quality lead databases.
• Use tools like LinkedIn Sales Navigator, ZoomInfo, Apollo.io, and Hoovers for intelligence gathering.
• Automate data tasks using SQL, Python, or other scripting tools.
• Maintain compliance with data privacy regulations (e.g., GDPR, CCPA).
• Drive continuous improvement through performance monitoring and insight generation.
• Manage drip campaigns and segmentation logic in Mailchimp, Zoho Campaigns, or HubSpot.

Who You Are:
• 5+ years of experience in data analytics, ideally in sales/marketing operations.
• Proficient in advanced Excel, SQL, and Tableau/Power BI; familiarity with HTML/CSS.
• Hands-on experience with HubSpot, Salesforce, and Zoho CRM.
• Exposure to MySQL, with Python or R being a strong plus.
• Experienced in managing timelines and collaborating across teams.
• Strong problem-solving skills and a data-driven mindset.
• Knowledge of data governance and compliance best practices.
• Bonus: exposure to machine learning or predictive analytics.

Why Join Us?
• 5-day work week: Monday to Friday (fixed Sat-Sun off)
• Work with a collaborative and data-forward team
• Opportunity to lead cross-functional data initiatives
• Competitive compensation up to ₹7.5 LPA
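Data hygiene of the kind this role owns usually begins with normalizing lead records and deduplicating them on a stable key. A small sketch of that step, assuming Python as one of the scripting tools the posting names; the field names and matching rule (first record wins per normalized email) are illustrative, not a production matching engine:

```python
import re

def normalize(record):
    """Normalize a CRM lead record for dedup matching (illustrative rules)."""
    return {
        "email": record.get("email", "").strip().lower(),
        "phone": re.sub(r"\D", "", record.get("phone", "")),  # digits only
        "name": record.get("name", "").strip().title(),
    }

def dedupe(records):
    """Keep the first record per normalized email address."""
    seen, out = set(), []
    for rec in map(normalize, records):
        if rec["email"] and rec["email"] in seen:
            continue
        seen.add(rec["email"])
        out.append(rec)
    return out
```

Real pipelines layer fuzzy name matching and enrichment on top, but a deterministic normalize-then-key pass like this is the usual first stage.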
Posted 1 day ago
0.0 - 2.0 years
0 Lacs
Mohali, Punjab
On-site
The Role: As a DevOps Engineer, you will be an integral part of the product and service division, working closely with development teams to ensure seamless deployment, scalability, and reliability of our infrastructure. You'll help build and maintain CI/CD pipelines, manage cloud infrastructure, and contribute to system automation. Your work will directly impact the performance and uptime of our flagship product, BotPenguin.

What you need for this role:
• Education: Bachelor's degree in Computer Science, IT, or a related field.
• Experience: 2-5 years in DevOps or similar roles.
• Technical Skills:
  - Proficiency in CI/CD tools like Jenkins, GitLab CI, or GitHub Actions.
  - Experience with containerization and orchestration using Docker and Kubernetes.
  - Strong understanding of cloud platforms, especially AWS & Azure.
  - Familiarity with infrastructure-as-code tools such as Terraform or CloudFormation.
  - Knowledge of monitoring and logging tools like Prometheus, Grafana, and ELK Stack.
  - Good scripting skills in Bash, Python, or similar languages.
• Soft Skills: Detail-oriented with a focus on automation and efficiency; strong problem-solving abilities and a proactive mindset; effective communication and collaboration skills.

What you will be doing:
• Build, maintain, and optimize CI/CD pipelines.
• Monitor and improve system performance, uptime, and scalability.
• Manage and automate cloud infrastructure deployments.
• Work closely with developers to support release processes and environments.
• Implement security best practices in deployment and infrastructure management.
• Ensure high availability and reliability of services.
• Document procedures and provide support for technical troubleshooting.
• Contribute to training junior team members, and assist HR and operations teams with tech-related concerns as required.

Top reasons to work with us:
• Be part of a cutting-edge AI startup driving innovation in chatbot automation.
• Work with a passionate and talented team that values knowledge-sharing and problem-solving.
• Growth-oriented environment with ample learning opportunities.
• Exposure to top-tier global clients and projects with real-world impact.
• A culture that fosters creativity, ownership, and collaboration.

Job Type: Full-time
Pay: ₹400,000.00 - ₹800,000.00 per year
Benefits: Flexible schedule, health insurance, leave encashment, Provident Fund
Schedule: Day shift
Ability to commute/relocate: Mohali, Punjab: reliably commute or planning to relocate before starting work (required)
Experience: DevOps: 2 years (required)
Work Location: In person
Posted 1 day ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role: DevOps Engineer / Java
Experience: 4-6 years
Job location: Noida, with 3 months of on-site training in Singapore
Functional Area: Java + DevOps + Kubernetes + AWS/cloud

JOB DESCRIPTION:

About the Role: We are seeking a highly motivated DevOps Engineer to join our team and play a pivotal role in building and maintaining our cloud infrastructure. The ideal candidate will have a strong understanding of DevOps principles and practices, with a focus on AWS, Kubernetes, CI/CD pipelines, Docker, and Terraform. Working knowledge of Java is a must for design and development.

Responsibilities:
• Cloud platforms: design, build, and maintain our cloud infrastructure, primarily on AWS.
• Infrastructure as Code (IaC): develop and manage IaC solutions using tools like Terraform to provision and configure cloud resources on AWS.
• Containerization: implement and manage Docker containers and Kubernetes clusters for efficient application deployment and scaling.
• CI/CD pipelines: develop and maintain automated CI/CD pipelines using tools like Jenkins, Bitbucket CI/CD, or ArgoCD to streamline software delivery.
• Automation: automate infrastructure provisioning, configuration management, and application deployment using tools like Terraform and Ansible.
• Monitoring and troubleshooting: implement robust monitoring and alerting systems to proactively identify and resolve issues.
• Collaboration: work closely with development teams to understand their needs and provide solutions that align with business objectives.
• Security: ensure compliance with security best practices and implement measures to protect our infrastructure and applications.

Qualifications:
• Bachelor's degree in computer science, engineering, or a related field.
• Strong proficiency in AWS services (EC2, S3, VPC, IAM, etc.).
• Experience with Kubernetes and container orchestration.
• Expertise in Java coding, and in CI/CD pipelines and tools (Jenkins, Bitbucket CI/CD, ArgoCD).
• Familiarity with Docker and containerization concepts.
• Experience with configuration management tools (Terraform, CloudFormation).
• Scripting skills (Java, Python, Bash).
• Understanding of networking and security concepts.
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for an ITX/WTX Developer
Email: ragavisaravanan@akrossit.com
Experience: 6-10 years
Location: Offshore (Delhi NCR, Chennai, Bangalore, Kochi)
Mandatory Keywords: Healthcare domain

Must-Have:
• Java Spring Boot, microservices, and backend services experience is a must
• Proficiency in IBM Transformation Extender (ITX) tools and components (e.g., ITX Designer, ITX Toolkit, ITX runtime environment)
• Strong understanding of data transformation processes, mapping, and routing using ITX
• Expertise in troubleshooting and resolving complex ITX job failures and performance issues
• DataPower experience
• Experience with SQL database integration, connecting to MQ, etc.
• Experience with automation and scripting languages (e.g., Python, Bash, PowerShell) for managing ITX jobs and processes

Good-to-Have:
• Identifies impact and provides resolution or escalation
• Creates reports and shares them with stakeholders
• Result-focused, team-oriented professional with a strong work ethic
• Strong verbal communication skills, with the ability to present technical details to a non-technical audience and to prepare clear and concise written documentation
• DataPower experience

Expectations from the Role:
• Provide day-to-day support for IBM Transformation Extender (ITX) jobs and data transformation processes.
• Troubleshoot and resolve issues related to data transformation, message flows, and ITX jobs.
• Perform root cause analysis for failed ITX jobs, ensuring timely identification and resolution of problems.
• Troubleshoot and resolve integration-related issues, including data inconsistencies and transformation errors between ITX and other applications.
• Optimize ITX jobs to enhance performance, reduce processing times, and minimize resource consumption.
• Handle patch management and maintenance tasks for ITX and related systems.
• Coordinate with other teams to move ITX jobs through various development stages, ensuring proper testing and quality assurance.
• Support integration efforts between ITX and other middleware and data systems (e.g., message queues, databases, and other data sources).
• Document troubleshooting steps and best practices to aid faster issue resolution and knowledge sharing.
• Stay updated with the latest IBM technologies and industry trends, and apply them to projects when appropriate.
• Provide technical guidance and mentorship to junior developers.
Posted 1 day ago
0.0 - 1.0 years
0 Lacs
Mohali, Punjab
On-site
The Role As a Product Analyst , you will play a critical role in helping us build data-driven, user-centric features on the BotPenguin platform. You will work closely with Product Managers, Design, Engineering, Marketing, and Customer Success Teams to analyze user behavior, validate feature performance, and uncover growth opportunities through actionable insights. This is an exciting opportunity to join a high-growth product team and influence strategic decisions at the intersection of data, product design, and customer experience. What you need for this role Education: Bachelorβs degree in Computer Science, Business Analytics, Engineering, Statistics, or related field. Experience: 2β5 years of experience in a product or data analyst role within a SaaS or tech product environment. Technical Skills: Strong expertise in MongoDB and data visualization tools (e.g., Tableau, Power BI, Metabase). Familiarity with Google Analytics, Mixpanel, Hotjar, GA4, Amplitude, or other product analytics platforms. Hands-on experience working with Excel/Google Sheets, building dashboards, and extracting user insights. Knowledge of product lifecycle, user funnels, A/B testing, and cohort analysis. Bonus: Exposure to Python, R, or basic scripting for data processing. Soft Skills: Excellent analytical and problem-solving skills. Strong communication and storytelling abilitiesβable to translate data into strategic insights. Proactive attitude with a willingness to own initiatives and drive improvements. Keen interest in product design, user experience, and tech innovation. What you will be doing Collaborate with Product Managers to define key metrics, success criteria, and feature adoption benchmarks. Analyze platform usage, customer behavior, and market data to discover pain points and opportunity areas. Generate and maintain weekly/monthly product reports and dashboards for cross-functional teams. 
Design and evaluate A/B tests, feature rollouts, and experiments to improve user engagement and retention. Work with the Engineering team to ensure accurate data tracking and event instrumentation. Monitor product KPIs and proactively raise red flags for anomalies or unexpected trends. Participate in roadmap discussions, contributing insights backed by data. Assist in user segmentation and support marketing and CS teams with insights for personalized communication and retention strategies. Assist with any other product development or management tasks as required.

Top reasons to work with us

Lead the architecture and evolution of a fast-growing AI product used globally. Be part of a cutting-edge AI startup driving innovation in chatbot automation. Work with a passionate and talented team that values knowledge-sharing and problem-solving. Growth-oriented environment with ample learning opportunities. Exposure to top-tier global clients and projects with real-world impact. Flexible work hours and an emphasis on work-life balance. A culture that fosters creativity, ownership, and collaboration.

Job Type: Full-time
Pay: ₹400,000.00 - ₹800,000.00 per year
Benefits: Flexible schedule, Health insurance, Provident Fund
Schedule: Day shift
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: Mixpanel: 1 year (Required); Amplitude: 1 year (Required); Hotjar: 1 year (Required); SaaS: 1 year (Required); heatmap/session replay tools: 1 year (Required)
Work Location: In person
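The A/B-testing skill this posting asks for usually comes down to a significance check on two conversion rates. A minimal sketch of a two-proportion z-test in plain Python (the sample numbers are illustrative, not real product data):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for comparing two conversion rates (pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts 120/1000 vs. control A at 100/1000.
z = two_proportion_z(100, 1000, 120, 1000)
print(round(z, 2))  # 1.43; below 1.96, so not significant at the 5% level
```

In practice an analytics platform or scipy would run this test, but the arithmetic above is what those tools compute under the hood.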
Posted 1 day ago
5.0 years
0 Lacs
Haryana, India
On-site
About Us

Empowering Businesses with the Right Technology Solutions. Are you ready to partner with Starlly for your projects? Streamlining post-sales service management with Servy. Empowering businesses with seamless IoT integration through Spectra. Moving from legacy systems to digitisation or modern technology stacks. Expert consultation on solution design and architecture.

Kubernetes Admin

What You Can Expect from Us: We work hard to provide our team with the best opportunities to grow their careers. You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership is at the center of everything we do, at all levels of the company. Let's make your career great!

Position Overview: We are looking for a Mid-Level Kubernetes Administrator to support and maintain our on-premises container orchestration infrastructure built on open-source Rancher Kubernetes. This role will focus on day-to-day cluster operations, deployment support, and working closely with DevOps, Infra, and Application teams.

Roles And Responsibilities

Manage Rancher-based Kubernetes clusters in an on-premise environment. Deploy and monitor containerized applications using Helm and Rancher UI/CLI. Support pod scheduling, resource allocation, and namespace management. Handle basic troubleshooting of workloads, networking, and storage issues. Monitor and report cluster health using Prometheus, Grafana, or similar tools. Manage users, roles, and access using Rancher-integrated RBAC. Participate in system patching, cluster upgrades, and capacity planning. Document standard operating procedures, deployment guides, and issue resolutions.

Location: Gurugram / Onsite

Requirements

Must Have Skills: 4-5 years of experience in Kubernetes administration in on-prem environments. Hands-on experience with Rancher for managing K8s clusters. Working knowledge of Linux system administration and networking. Experience in Docker, Helm, and basic YAML scripting.
Exposure to CI/CD pipelines and Git-based deployment workflows. Experience with monitoring/logging stacks (Prometheus, Grafana). Familiarity with RKE (Rancher Kubernetes Engine).

Good to Have Skills: Certified Kubernetes Administrator (CKA). Experience with bare metal provisioning, VM infrastructure, or storage systems.

Soft Skills: Leadership in operational excellence and incident management. Strong communication with cross-functional teams and stakeholders. Ability to manage critical incidents and mentor junior engineers.

Qualification: BE/BTech/MCA/ME/MTech/MS in Computer Science or a related technical field, or equivalent practical experience.
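A routine task in the troubleshooting duties above is scanning workload health across namespaces. A minimal sketch of that check in Python, assuming the JSON shape of `kubectl get pods -A -o json` (the sample data below is illustrative, not from a real cluster):

```python
import json

def unhealthy_pods(pods_json):
    """Return (namespace, name, phase) for pods whose phase is not
    Running or Succeeded (the healthy terminal/steady phases)."""
    bad = []
    for item in json.loads(pods_json)["items"]:
        phase = item["status"]["phase"]
        if phase not in ("Running", "Succeeded"):
            ns = item["metadata"]["namespace"]
            bad.append((ns, item["metadata"]["name"], phase))
    return bad

# Illustrative stand-in for captured `kubectl get pods -A -o json` output.
sample = json.dumps({"items": [
    {"metadata": {"namespace": "default", "name": "web-1"},
     "status": {"phase": "Running"}},
    {"metadata": {"namespace": "ingest", "name": "etl-7"},
     "status": {"phase": "Failed"}},
]})
print(unhealthy_pods(sample))  # [('ingest', 'etl-7', 'Failed')]
```

The same filter is what a Prometheus alert on `kube_pod_status_phase` expresses declaratively; scripting it is handy for ad-hoc audits.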
Posted 1 day ago
0.0 - 5.0 years
0 Lacs
Mohali, Punjab
On-site
The Role: As a Product Analyst, you will play a critical role in helping us build data-driven, user-centric features on the BotPenguin platform. You will work closely with Product Managers, Design, Engineering, Marketing, and Customer Success Teams to analyze user behavior, validate feature performance, and uncover growth opportunities through actionable insights. This is an exciting opportunity to join a high-growth product team and influence strategic decisions at the intersection of data, product design, and customer experience.

What you need for this role:

Education: Bachelor's degree in Computer Science, Business Analytics, Engineering, Statistics, or a related field.
Experience: 2-5 years of experience in a product or data analyst role within a SaaS or tech product environment.
Technical Skills: Strong expertise in MongoDB and data visualization tools (e.g., Tableau, Power BI, Metabase). Familiarity with Google Analytics, Mixpanel, Hotjar, or other product analytics platforms. Hands-on experience working with Excel/Google Sheets, building dashboards, and extracting user insights. Knowledge of product lifecycle, user funnels, A/B testing, and cohort analysis. Bonus: Exposure to Python, R, or basic scripting for data processing.
Soft Skills: Excellent analytical and problem-solving skills. Strong communication and storytelling abilities; able to translate data into strategic insights. Proactive attitude with a willingness to own initiatives and drive improvements. Keen interest in product design, user experience, and tech innovation.

What you will be doing:

Collaborate with Product Managers to define key metrics, success criteria, and feature adoption benchmarks. Analyze platform usage, customer behavior, and market data to discover pain points and opportunity areas. Generate and maintain weekly/monthly product reports and dashboards for cross-functional teams.
Design and evaluate A/B tests, feature rollouts, and experiments to improve user engagement and retention. Work with the Engineering team to ensure accurate data tracking and event instrumentation. Monitor product KPIs and proactively raise red flags for anomalies or unexpected trends. Participate in roadmap discussions, contributing insights backed by data. Assist in user segmentation and support marketing and CS teams with insights for personalized communication and retention strategies. Assist with any other product development or management tasks as required.

Top reasons to work with us:

Lead the architecture and evolution of a fast-growing AI product used globally. Be part of a cutting-edge AI startup driving innovation in chatbot automation. Work with a passionate and talented team that values knowledge-sharing and problem-solving. Growth-oriented environment with ample learning opportunities. Exposure to top-tier global clients and projects with real-world impact. Flexible work hours and an emphasis on work-life balance. A culture that fosters creativity, ownership, and collaboration.

Job Type: Full-time
Pay: ₹600,000.00 - ₹800,000.00 per year
Benefits: Flexible schedule, Leave encashment, Provident Fund
Schedule: Day shift
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Work Location: In person
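The cohort analysis named in the requirements reduces to grouping users by signup period and measuring what fraction stay active at each later period. A small self-contained sketch (months are encoded as integer offsets; the event tuples are made-up sample data):

```python
from collections import defaultdict

def retention_by_cohort(events):
    """events: (user_id, signup_month, active_month) tuples.
    Returns {cohort_month: {offset: fraction of cohort active}}."""
    cohort_users = defaultdict(set)
    active = defaultdict(set)
    for user, signup, month in events:
        cohort_users[signup].add(user)
        active[(signup, month - signup)].add(user)
    return {
        c: {off: len(active[(c, off)]) / len(users)
            for (cc, off) in active if cc == c}
        for c, users in cohort_users.items()
    }

# Made-up activity log: u1 and u2 sign up in month 0, u3 in month 1.
events = [
    ("u1", 0, 0), ("u1", 0, 1),
    ("u2", 0, 0),
    ("u3", 1, 1), ("u3", 1, 2),
]
print(retention_by_cohort(events))
# {0: {0: 1.0, 1: 0.5}, 1: {0: 1.0, 1: 1.0}}
```

Tools like Mixpanel or Amplitude render this as a retention triangle; the computation is the same grouping shown here.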
Posted 1 day ago
4.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
Digital Assurance (DA), Pune | Posted On: 16 Jun 2025 | End Date: 31 Dec 2025 | Required Experience: 4 - 6 Years
Role: Senior QA Engineer | Employment Type: Full Time Employee | Company: NewVision (New Vision Softcom & Consultancy Pvt. Ltd) | Department/Practice: Digital Assurance (DA) | Organization Unit: Business Assurance / Functional | Region: APAC | Country: India | Base Office Location: Pune | Working Model: Hybrid | Weekly Off: Pune Office Standard | State: Maharashtra | Highest Education: Graduation/equivalent course | Working Language: English

Job Description

Qualifications. The Engineer will possess: 4+ years of Java & C# development experience, at a financial firm (preferred). Strong analytical skills, able to comprehend and dissect a problem. Must have strong deductive reasoning and diagnostic problem-solving skills. Self-motivated, fast learner, willing to go outside of comfort zone and learn new things.

Core Skills

Hands-on Java & C# technologist with a reference data background. Ability to synthesize business and functional requirements and design solutions. Prior experience working across regions and cross-functional teams. Partner with QA/Ops/Client teams in supporting Dev/QA/UAT initiatives. Experience with source control systems such as Git. Knowledge of various branching/merging methodologies and release procedures. Experience with continuous integration build engines, such as Jenkins and GitLab. Experience with IT automation tools: Selenium, Maven, TestNG, BDD, and XML. Exposure to Agile development processes and tools such as Confluence, JIRA, and Kanban. Scripting experience (Python, Perl, Bash, batch, etc.) and familiarity with Windows, Linux (RHEL preferred), and Unix. Experience with databases, both relational (Oracle, MSSQL) and non-relational (NoSQL, Mongo). Experience in release engineering, configuration management, software development, or a related discipline.
Posted 1 day ago
0.0 - 4.0 years
0 Lacs
Mohali, Punjab
On-site
Job Information
Date Opened: 06/16/2025
Job Type: Full time
Industry: IT Services
Work Experience: 3+ Years
Salary: 8-15 LPA
City: Mohali | State/Province: Punjab | Country: India | Zip/Postal Code: 160071

Job Description

ABOUT XENONSTACK

XenonStack is the fastest-growing data and AI foundry for agentic systems, which enables people and organizations to gain real-time and intelligent business insights.

Building Agentic Systems for AI Agents with https://www.akira.ai
Vision AI Platform with https://www.xenonstack.ai
Inference AI Infrastructure for Agentic Systems - https://www.nexastack.ai

THE OPPORTUNITY

We are seeking an experienced DevOps Engineer with 3-6 years of experience in implementing and reviewing CI/CD pipelines, cloud deployments, and automation tasks. If you have a strong foundation in cloud technologies, containerization, and DevOps best practices, we would love to have you on our team.

JOB ROLES AND RESPONSIBILITIES

Develop and maintain CI/CD pipelines to automate the deployment and testing of applications across AWS and private cloud. Assist in deploying applications and services to cloud environments while ensuring optimal configuration and security practices. Implement monitoring solutions to ensure infrastructure health and performance; troubleshoot issues as they arise in production environments. Automate repetitive tasks and manage cloud infrastructure using tools like Terraform, CloudFormation, and scripting languages (Python, Bash). Work closely with software engineers to integrate deployment pipelines with application codebases and streamline workflows. Ensure efficient resource management in the cloud, monitor costs, and optimize usage to reduce waste. Create detailed documentation for DevOps processes, deployment procedures, and troubleshooting steps to ensure clarity and consistency across the team.

Requirements

SKILLS REQUIREMENTS

2-4 years of experience in DevOps or cloud infrastructure engineering.
Proficiency in cloud platforms on AWS, and hands-on experience with their core services (EC2, S3, RDS, Lambda, etc.). Advanced knowledge of CI/CD tools such as Jenkins, GitLab CI, or CircleCI, and hands-on experience implementing and managing CI/CD pipelines. Experience with containerization technologies like Docker and Kubernetes for deploying applications at scale. Strong knowledge of Infrastructure-as-Code (IaC) using tools like Terraform or CloudFormation. Proficient in scripting languages such as Python and Bash for automating infrastructure tasks and deployments. Understanding of monitoring and logging tools like Prometheus, Grafana, ELK Stack, or CloudWatch to ensure system performance and uptime. Strong understanding of Linux-based operating systems and cloud-based infrastructure management. Bachelor's degree in Computer Science, Information Technology, or a related field.

Benefits

CAREER GROWTH AND BENEFITS

Continuous Learning & Growth: Access to training, certifications, and hands-on sessions to enhance your DevOps and cloud engineering skills. Opportunities for career advancement and leadership roles in DevOps engineering.

Recognition & Rewards: Performance-based incentives and regular feedback to help you grow in your career. Special recognition for contributions towards streamlining and improving DevOps practices.

Work Benefits & Well-Being: Comprehensive health insurance and wellness programs to ensure a healthy work-life balance. Cab facilities for women employees and additional allowances for project-based tasks.

XENONSTACK CULTURE - JOIN US & MAKE AN IMPACT

Here at XenonStack, we have a culture of cultivation with bold, courageous, and human-centric leadership principles. We value obsession and deep work in everything we do. We are on a mission to disrupt and reshape the category and welcome people with that mindset and ambition.
If you are energised by the idea of shaping the future of AI in business processes and enterprise systems, there's nowhere better for you than XenonStack.

Product Value and Outcome - Simplifying the user experience with AI Agents and Agentic AI

1) Obsessed with Adoption: We design everything with the goal of making AI more accessible and simplifying the business processes and enterprise systems essential to adoption.
2) Obsessed with Simplicity: We simplify even the most complex challenges to create seamless, intuitive experiences with AI agents and Agentic AI.

Be a part of XenonStack's Vision and Mission for Accelerating the world's transition to AI + Human Intelligence.
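One of the responsibilities listed in this posting is monitoring cloud costs and optimizing usage to reduce waste. The core of such an automation is just a utilisation filter over metric samples. A minimal sketch in Python, using made-up instance IDs and CPU numbers where real data would come from CloudWatch or a similar API:

```python
def idle_instances(cpu_stats, threshold=5.0):
    """cpu_stats: {instance_id: [average CPU %, one sample per day]}.
    Flag instances whose utilisation never exceeds the threshold."""
    return sorted(
        iid for iid, samples in cpu_stats.items()
        if samples and max(samples) < threshold
    )

# Illustrative data; in practice these samples would be fetched from
# a monitoring API, not hard-coded.
stats = {
    "i-web01": [42.0, 55.3, 61.2],
    "i-batch7": [1.2, 0.8, 2.5],
    "i-dev03": [3.9, 4.4, 1.0],
}
print(idle_instances(stats))  # ['i-batch7', 'i-dev03']
```

The flagged instances would then be candidates for rightsizing, scheduling, or termination, whatever the team's cost policy prescribes.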
Posted 1 day ago
6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
GE Healthcare - Healthcare Science & Technology Organization
Category: Digital Technology / IT | Level: Senior | Job Id: R4025620 | Relocation Assistance: No | Location: Bengaluru, Karnataka, India, 560066

Job Description Summary

As a Sr. Staff Software Engineer, you will be responsible for defining and implementing the observability strategy across our next-generation cloud-native healthcare services. You will lead the design of scalable, secure, and innovative observability frameworks that empower engineering teams to build resilient, performant, and diagnosable systems. This role is critical to ensuring our platform meets the highest standards of reliability, compliance, and operational excellence.

GE Healthcare is a leading global medical technology and digital solutions innovator. Our mission is to improve lives in the moments that matter. Unlock your ambition, turn ideas into world-changing realities, and join an organization where every voice makes a difference, and every difference builds a healthier world.

Job Description

Roles and Responsibilities: In this role, you will:

Define and evolve the observability vision and roadmap for our cloud-native healthcare platform. Architect and implement standardized observability frameworks (metrics, logs, traces, events, profiling) across services. Collaborate with platform, SRE, and product teams to instrument services using OpenTelemetry and other modern observability tooling. Build and maintain dashboards, alerts, and SLOs that reflect both technical and business health indicators. Evaluate, integrate, and optimize observability tools (e.g., Datadog, Prometheus, Grafana, Tempo, Loki, Elastic). Lead incident analysis and postmortem reviews, driving improvements in system resilience and observability coverage. Ensure observability practices align with healthcare compliance standards (e.g., HIPAA, HITRUST). Mentor engineers and promote a culture of observability-first development.
Educational Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.

Required Qualifications: 12+ years of experience in software engineering, SRE (Site Reliability Engineering), or platform engineering roles. 6+ years of experience architecting observability solutions in cloud-native environments (Kubernetes, microservices, serverless). Deep expertise in observability pillars (metrics, logs, traces) and tools like OpenTelemetry, Prometheus, Grafana, Datadog, etc. Strong programming/scripting skills (e.g., Go, Python, Bash, Terraform). Experience with distributed tracing, SLO/SLI frameworks, and incident response workflows. Familiarity with cloud platforms (AWS, GCP, or Azure) and CI/CD pipelines. Excellent communication and collaboration skills.

Desired Qualifications: Experience in healthcare or regulated industries. Knowledge of data privacy and compliance (HIPAA, HITRUST). Experience with cost optimization and telemetry data governance. Contributions to open-source observability projects. Passion for building developer-friendly tools and frameworks.

Business Acumen: Adept at navigating the organizational matrix; understanding people's roles, can foresee obstacles, identify workarounds, leverage resources, and rally teammates. Understand how internal and/or external business models work and facilitate active customer engagement. Able to articulate the value of what is most important to the business/customer to achieve outcomes. Able to produce functional area information in sufficient detail for cross-functional teams to utilize, using presentation and storytelling concepts.

Leadership: Demonstrated working knowledge of internal organization. Foresee obstacles, identify workarounds, leverage resources, rally teammates. Demonstrated ability to work with and/or lead blended teams, including 3rd party partners and customer personnel.
Demonstrated Change Management/Acceleration capabilities. Strong interpersonal skills, including creativity and curiosity, with the ability to effectively communicate and influence across all organizational levels. Proven analytical and problem resolution skills. Ability to influence and build consensus with other Information Technology (IT) teams and leadership.

Inclusion and Diversity

GE Healthcare is an Equal Opportunity Employer where inclusion matters. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. We expect all employees to live and breathe our behaviors: to act with humility and build trust; lead with transparency; deliver with focus, and drive ownership, always with unyielding integrity. Our total rewards are designed to unlock your ambition by giving you the boost and flexibility you need to turn your ideas into world-changing realities. Our salary and benefits are everything you'd expect from an organization with global strength and scale, and you'll be surrounded by career opportunities in a culture that fosters care, collaboration and support.

#Everyroleisvital #LI-Hybrid #LI-SM1

Additional Information: Relocation Assistance Provided: No
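The SLO/SLI work this role calls for often reduces to tracking how fast an error budget is being consumed. A minimal sketch of a burn-rate calculation (the numbers are illustrative and the function is not tied to any specific platform's API):

```python
def burn_rate(errors, total, slo_target=0.999):
    """Burn rate: the observed error ratio divided by the error budget.
    A rate of 1.0 means the budget will be exactly exhausted at the end
    of the SLO period; above 1.0 means it will run out early."""
    error_ratio = errors / total
    budget = 1 - slo_target  # e.g. a 99.9% SLO allows 0.1% failures
    return error_ratio / budget

# 50 failures out of 100,000 requests against a 99.9% availability SLO.
rate = burn_rate(50, 100_000)
print(round(rate, 2))  # 0.5: consuming budget at half the sustainable pace
```

Multi-window variants of this ratio (e.g. comparing a 1-hour and a 6-hour window) are the usual basis for burn-rate alerts in SRE practice.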
Posted 1 day ago
0.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Here at Appian, our core values of Respect, Work to Impact, Ambition, and Constructive Dissent & Resolution define who we are. In short, this means we constantly seek to understand the best for our customers, we go beyond completion in our work, we strive for excellence with intensity, and we embrace candid communication. These values guide our actions and shape our culture every day. When you join Appian, you'll be part of a passionate team that's dedicated to accomplishing hard things.

As a DevOps & Test Infrastructure Engineer your goal is to design, implement, and maintain a robust, scalable, and secure AWS infrastructure to support our growing testing needs. You will be instrumental in building and automating our DevOps pipeline, ensuring efficient and reliable testing processes. This role offers the opportunity to shape our performance testing environment and contribute directly to the quality and speed of our clients' Appian software delivery.

Responsibilities

Architecture Design: Design and architect a highly scalable and cost-effective AWS infrastructure tailored for testing purposes, considering security, performance, and maintainability. DevOps Pipeline Design: Architect a secure and automated DevOps pipeline on AWS, integrating tools such as Jenkins for continuous integration/continuous delivery (CI/CD) and Locust for performance testing. Infrastructure as Code (IaC): Implement infrastructure as code (IaC) using tools like Terraform or AWS CloudFormation to enable automated deployment and scaling of the testing environment. Security Implementation: Implement and enforce security best practices across the AWS infrastructure and DevOps pipeline, ensuring compliance and protecting sensitive data. Jenkins (or similar CI/CD automation platform) Configuration & Administration: Install, configure, and administer Jenkins, including setting up build pipelines, managing plugins, and ensuring its scalability and reliability.
Locust Configuration & Administration: Install, configure, and administer Locust for performance and load testing. Automation: Automate the deployment, scaling, and management of all infrastructure components and the DevOps pipeline. Monitoring and Logging: Implement comprehensive monitoring and logging solutions to proactively identify and resolve issues within the testing environment, including exposing test results for consumption. Troubleshooting and Support: Provide expert-level troubleshooting and support for the testing infrastructure and DevOps pipeline. Collaboration: Work closely with development, QA, and operations teams to understand their needs and provide effective solutions. Documentation: Create and maintain clear and concise documentation for the infrastructure, pipeline, and processes. Continuous Improvement: Stay up-to-date with the latest AWS services and DevOps best practices, and proactively identify opportunities for improvement.

Qualifications

Proven experience in designing and implementing scalable architectures on Amazon Web Services (AWS). Strong understanding of DevOps principles and practices. Hands-on experience with CI/CD tools, for example Jenkins, including pipeline creation and administration. Experience with performance testing tools, preferably Locust, including test design and execution. Proficiency in infrastructure as code (IaC) tools such as Terraform or AWS CloudFormation. Solid understanding of security best practices in cloud environments. Experience with containerization technologies like Docker and orchestration tools like Kubernetes or AWS ECS (preferred). Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack, CloudWatch). Excellent scripting skills (e.g., Python, Bash). Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Ability to work independently and as part of a team.
AWS certifications (e.g., AWS Certified Solutions Architect – Associate/Professional, AWS Certified DevOps Engineer – Professional). Experience with other testing tools and frameworks. Experience with agile development methodologies.

Education

B.S. in Computer Science, Engineering, Information Systems, or related field.

Working Conditions

Opportunity to work on enterprise-scale applications across different industries. This role is based at our office at WTC 11th floor, Old Mahabalipuram Road, SH 49A, Kandhanchavadi, Kottivakkam, Chennai, Tamil Nadu 600041, India. Appian was built on a culture of in-person collaboration, which we believe is a key driver of our mission to be the best. Employees hired for this position are expected to be in the office 5 days a week to foster that culture and ensure we continue to thrive through shared ideas and teamwork. We believe being in the office provides more opportunities to come together and celebrate working with the exceptional people across Appian.

Tools and Resources

Training and Development: During onboarding, we focus on equipping new hires with the skills and knowledge for success through department-specific training. Continuous learning is a central focus at Appian, with dedicated mentorship and the First-Friend program being widely utilized resources for new hires. Growth Opportunities: Appian provides a diverse array of growth and development opportunities, including our leadership program tailored for new and aspiring managers, a comprehensive library of specialized department training through Appian University, skills based training, and tuition reimbursement for those aiming to advance their education. This commitment ensures that employees have access to a holistic range of development opportunities. Community: We'll immerse you into our community rooted in respect starting on day one. Appian fosters inclusivity through our 8 employee-led affinity groups.
These groups help employees build stronger internal and external networks by planning social, educational, and outreach activities to connect with Appianites and larger initiatives throughout the company.

About Appian

Appian is a software company that automates business processes. The Appian AI-Powered Process Platform includes everything you need to design, automate, and optimize even the most complex processes, from start to finish. The world's most innovative organizations trust Appian to improve their workflows, unify data, and optimize operations, resulting in better growth and superior customer experiences. For more information, visit appian.com. [Nasdaq: APPN] Follow Appian: Twitter, LinkedIn.

Appian is an equal opportunity employer that strives to attract and retain the best talent. All qualified applicants will receive consideration for employment without regard to any characteristic protected by applicable federal, state, or local law. Appian provides reasonable accommodations to applicants in accordance with all applicable laws. If you need a reasonable accommodation for any part of the employment process, please contact us by email at ReasonableAccommodations@appian.com. Please note that only inquiries concerning a request for reasonable accommodation will be responded to from this email address.

Appian's Applicant & Candidate Privacy Notice
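The performance-testing responsibilities in this posting centre on generating concurrent load and summarising latency. The idea can be sketched in plain Python with a stub standing in for a real HTTP call; Locust packages this same pattern (user classes, spawned workers, latency statistics) as a full framework:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stub for an HTTP call to the system under test (~10 ms)."""
    time.sleep(0.01)
    return 200

def run_load(n_users, requests_per_user):
    """Spawn n_users workers, each issuing requests in a loop,
    and report request count and 95th-percentile latency."""
    latencies = []
    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            fake_request()
            latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        for _ in range(n_users):
            pool.submit(user)
    latencies.sort()
    return {"count": len(latencies),
            "p95_ms": latencies[int(0.95 * len(latencies))] * 1000}

stats = run_load(n_users=5, requests_per_user=10)
print(stats["count"])  # 50 requests recorded
```

A real Locust test would replace `fake_request` with an `HttpUser` task and let Locust's distributed workers and web UI handle the orchestration and reporting.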
Posted 1 day ago
0.0 - 4.0 years
0 Lacs
Gurugram, Haryana
On-site
Gurugram, Haryana | Work Type: Full Time

ABOUT US: Paxcom, a leading digital solution provider, is now part of Paymentus, a leading electronic bill payment provider. Paymentus leads the North American marketplace in electronic bill payment solutions and has recently signed a partnership with PayPal and Alexa. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill presentment and payment services for more than 1300 clients leading the Utility, Telecom, Auto Finance, Insurance, Consumer Finance, and Health industries. Our comprehensive eBilling and Payment Platform allows our clients to provide a unified customer bill-pay experience that includes online, mobile, IVR, text, kiosk, and agent-assisted channels, as well as a full range of customer communication options. For more details, please visit www.paymentus.com

Job Location: Gurugram
Job Type: Permanent
Interview process: Coding challenge >> Round 1 (technical, webcam) >> Managerial webcam round (tech + managerial)

Join our team as an AI/ML Specialist and contribute to the delivery of cutting-edge capabilities by leveraging the latest technologies. In this role, you will collaborate closely with stakeholders and technical team members to drive impactful solutions.

Requirements: Minimum of 4 years of relevant experience in Data Science, Machine Learning, and AI techniques, with a strong background in open source technologies. Exposure to Gen AI models such as Falcon, Llama 2, GPT-3.5 & 4, and prompt engineering. Experience in creating basic RAG pipelines. Familiarity with cloud services such as AWS (EC2, S3, ECR). Expertise in at least one of the following areas: Time Series Analysis, Standard Machine Learning Algorithms, or Deep Learning. Experience with statistical analysis, data mining, temporal and pattern analysis, correlation of events, predictive modeling, and pattern recognition for various use cases.
Area of expertise: In-depth knowledge and hands-on experience with at least one object detection framework, including TFOD, Detectron, and YOLO, operating within a large-scale distributed platform. Or, solid understanding of Deep Learning fundamentals (CNN, RNN, attention/memory) and extensions (Transformer, LSTM, ResNet, etc.). Or, knowledge of state-of-the-art ML algorithms such as BERT, ELMo, GPT, GPT-2, XLNET, T5, LSTMs, CRFs, etc., APIs, ONNX, and open-source methods. Strong experience with data science tools including Python scripting, CUDA, numpy, scipy, matplotlib, scikit-learn, bash scripting, and Linux environments. Strong decision-making and the ability to justify actions effectively.

Why Join Us?

Empowerment to focus on pivotal tasks. Embrace a flexible and laid-back work atmosphere. Prioritize work-life balance. Appreciate collaborating with a goal-driven team. Interact with an approachable, supportive, and accomplished management team. Competitive compensation package. Engage with cutting-edge technologies as they emerge in the market. Witness the direct impact of your code on the lives of millions of customers.
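The posting asks for experience creating basic RAG pipelines. The retrieval step can be sketched with a toy bag-of-words cosine similarity; this is illustrative only, as production pipelines use embedding models and vector stores rather than word counts:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by bag-of-words similarity to the query."""
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(d.lower().split())), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

# Toy corpus; in a real RAG pipeline the top result would be stuffed
# into the LLM prompt as grounding context.
docs = [
    "invoices are paid through the billing portal",
    "the model is fine-tuned on payment data",
    "office hours are nine to five",
]
top = retrieve("how are invoices paid", docs)
print(top)  # the billing-portal sentence ranks first
```

Swapping `Counter` vectors for dense embeddings and the list for a vector index gives the retrieval half of a basic RAG system; the generation half simply prepends the retrieved text to the prompt.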
Posted 1 day ago
0.0 - 10.0 years
0 Lacs
Delhi
On-site
SUMMARY

We are seeking an experienced and versatile Senior Science Writer to lead the creation of high-quality science communication materials. This role is ideal for someone who can translate complex scientific ideas into compelling, accessible narratives for a variety of audiences, including policy makers, funders, media, and the general public. The ideal candidate has deep subject-matter literacy across scientific domains, impeccable writing and editing skills, and a proven track record of thought leadership in science communication. You will work closely with researchers, subject experts, and leadership to produce content that is accurate, strategic, and engaging.

Location - Delhi

ABOUT US - https://www.wadhwaniai.org/

Wadhwani AI is a nonprofit institute building and deploying applied AI solutions to solve critical issues in public health, agriculture, education, and urban development in underserved communities in the global south. We collaborate with governments, social sector organizations, academic and research institutions, and domain experts to identify real-world problems, and develop practical AI solutions to tackle these issues with the aim of making a substantial positive impact. We have over 30 AI projects supported by leading philanthropies such as the Bill & Melinda Gates Foundation, USAID and Google.org. With a team of over 200 professionals, our expertise encompasses AI/ML research and innovation, software engineering, domain knowledge, design and user research.

In the Press: Our Founder Donors are among the Top 100 AI Influencers. G20 India's Presidency: AI Healthcare, Agriculture, & Education Solutions Showcased Globally.
Unlocking the potentials of AI in Public Health. Wadhwani AI Takes an Impact-First Approach to Applying Artificial Intelligence - data.org. Winner of the H&M Foundation Global Change Award 2022. Sole Indian Winners of the 2019 Google AI Impact Challenge, and the first in the Asia Pacific to host Google Fellows.

Cultures page of Wadhwani AI - https://www.wadhwaniai.org/culture/

ROLES AND RESPONSIBILITIES

Research, write, and edit long- and short-form content such as reports, white papers, case studies, op-eds, web copy, funding proposals, and press materials. Collaborate with researchers, program teams, and communications leads to frame scientific work in ways that are relevant and compelling for external stakeholders. Develop content strategies that align with organizational goals and communication campaigns. Ensure scientific accuracy while optimizing for clarity, tone, and impact. Work with design, digital, and multimedia teams to produce content that is visually engaging and accessible. Stay abreast of emerging trends in science communication, journalism, and public engagement. Mentor junior writers or editors and help build internal capacity in science storytelling.

REQUIREMENTS

A postgraduate or doctoral degree (PhD preferred) in science, technology, economics, medicine, or a related field. 7-10 years of professional experience in science writing, journalism, communications, or a related field. Demonstrated ability to write for diverse audiences (technical and non-technical) across a range of formats. Deep understanding of scientific concepts and the ability to critically interpret peer-reviewed research. Outstanding writing, editing, and storytelling skills with a strong portfolio to showcase. Experience working in or with scientific organizations, think tanks, research institutes, universities, or media outlets. Ability to manage multiple projects and deadlines in a fast-paced environment.
A graduate degree in science, journalism, communications, or a related field is preferred.

Good to Have
Familiarity with donor-funded ecosystems (e.g., philanthropy, foundations, multilateral agencies).
Experience in AI, global health, agriculture & education.
Multimedia or digital storytelling skills (e.g., scripting for video, data visualization, podcasts).

We are committed to promoting diversity and the principle of equal employment opportunity for all our employees and encourage qualified candidates to apply irrespective of religion or belief, ethnic or social background, gender, gender identity, and disability. If you have any questions, please email us at careers@wadhwaniai.org.
Posted 1 day ago
0.0 years
0 Lacs
Delhi
On-site
Job requisition ID: 84448
Date: Jun 16, 2025
Location: Delhi
Designation: Assistant Manager

Your potential, unleashed.
India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realize your potential amongst cutting-edge leaders and organizations shaping the future of the region, and indeed, the world beyond.

At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The team
Technology & Transformation is about much more than just the numbers. It's about attesting to accomplishments and challenges and helping to assure strong foundations for future aspirations. Deloitte exemplifies the what, how, and why of change so you're always ready to act ahead. Learn more about the Technology & Transformation Practice.

Job Summary:
We are looking for a skilled Microsoft Sentinel SIEM Engineer to join our Cybersecurity Operations team. The ideal candidate will be responsible for the deployment, configuration, integration, and operational support of Microsoft Sentinel as a core SIEM platform, ensuring efficient threat detection, incident response, and security monitoring.

Key Responsibilities:
Design, implement, and manage Microsoft Sentinel for enterprise security monitoring.
Develop and maintain analytic rules (KQL-based) and detection use cases aligned with MITRE ATT&CK.
Integrate various log sources (on-prem and cloud) including Microsoft 365, Azure, AWS, endpoints, firewalls, etc.
Create and manage playbooks using Azure Logic Apps for automated incident response.
Monitor data connectors and ensure log ingestion health and optimization.
Conduct threat hunting and deep-dive analysis using Kusto Query Language (KQL).
Optimize performance, cost, and retention policies in Sentinel and the Log Analytics workspace.
Collaborate with SOC analysts, incident responders, and threat intelligence teams.
Participate in use case development, testing, and fine-tuning of alert rules to reduce false positives.
Support compliance and audit requirements by producing relevant reports and documentation.

Required Skills & Qualifications:
3+ years of experience working with Microsoft Sentinel SIEM.
Strong hands-on experience with KQL (Kusto Query Language).
Solid understanding of log ingestion from different sources including Azure, O365, Defender, firewalls, and servers.
Experience with Azure Logic Apps for playbook creation and automation.
Familiarity with incident response workflows and threat detection methodologies.
Knowledge of security frameworks such as MITRE ATT&CK, NIST, or ISO 27001.
Microsoft certifications such as SC-200 (Microsoft Security Operations Analyst) or AZ-500 are preferred.

Good to Have:
Experience with Defender for Endpoint, Defender for Cloud, Microsoft Purview.
Knowledge of other SIEM platforms (e.g., Splunk, QRadar) for hybrid environments.
Scripting experience (PowerShell, Python) for automation and integration.

Certifications (Preferred but not mandatory):
SC-200: Microsoft Security Operations Analyst
AZ-500: Microsoft Azure Security Technologies
CEH, CompTIA Security+, or equivalent

How you'll grow
Connect for impact
Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.

Empower to lead
You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
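As a rough illustration of the alert-tuning work this role describes (suppressing noisy repeats of the same detection to reduce false positives), here is a minimal Python sketch. The alert fields `rule`, `entity`, and `time`, and the thresholds, are assumptions made for the example; they are not part of any Sentinel API.

```python
from collections import defaultdict
from datetime import timedelta

def triage(alerts, window_minutes=60, repeat_threshold=3):
    """Keep the first few alerts per (rule, entity) pair inside a rolling
    window; suppress the rest as likely duplicates. Illustrative only."""
    window = timedelta(minutes=window_minutes)
    kept_times = defaultdict(list)      # (rule, entity) -> timestamps kept
    actionable, suppressed = [], []
    for alert in sorted(alerts, key=lambda a: a["time"]):
        key = (alert["rule"], alert["entity"])
        # only count previously kept alerts that fall inside the window
        recent = [t for t in kept_times[key] if alert["time"] - t <= window]
        if len(recent) < repeat_threshold:
            kept_times[key] = recent + [alert["time"]]
            actionable.append(alert)
        else:
            suppressed.append(alert)
    return actionable, suppressed
```

In a real deployment this kind of logic would live in a Sentinel analytic rule or a Logic Apps playbook rather than a standalone script; the sketch only shows the suppression idea.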
Inclusion for all
At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams, and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude, and potential each and every one of us brings to the table to make an impact that matters.

Drive your career
At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.

Everyone's welcome… entrust your happiness to us
Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.

Interview tips
We want job seekers exploring opportunities at Deloitte to feel prepared, confident, and comfortable. To help you with your interview, we suggest that you do your research and know some background about the organisation and the business area you're applying to. Check out recruiting tips from Deloitte professionals.
Posted 1 day ago
0.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You deserve to do what you love, and love what you do - a career that works as hard for you as you do. At Fiserv, we are more than 40,000 #FiservProud innovators delivering superior value for our clients through leading technology, targeted innovation, and excellence in everything we do. You have choices - if you strive to be a part of a team driven to create with purpose, now is your chance to Find your Forward with Fiserv.

Responsibilities
Requisition ID: R-10358287
Date posted: 06/16/2025
End Date: 06/21/2025
City: Noida
State/Region: Uttar Pradesh
Country: India
Location Type: Onsite

Calling all innovators - find your future at Fiserv. We're Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day - quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Specialist, Systems Engineering

What does a successful System Engineer - Specialist do? As an experienced member of our Signature Core data center team, you will be responsible for effective management of activities like incident management, execution and validation of changes, supporting client systems, patching, and validation and deployment of releases. You will collaborate with cross-functional teams to implement automation and continuous improvement processes to enhance efficiency and reduce downtime. The ideal candidate will have a strong background in supporting AS/400 iSeries, RPGLE, RPG400, scripting, Dynatrace, Ansible, YAML, Harness, Splunk, and ASM.
What you will do:
Strong hands-on experience in RPG400/IV, RPG Free, CL, SQL, Embedded SQL, Query & ILE.
Recent and advanced experience with RPG (ILE/FREE) using procedures, service programs, and functions.
BFSI/banking domain knowledge is desirable.
Experience with iSeries Navigator and the Integrated File System (IFS) is preferred.
Knowledge of web services, JSON, and REST APIs will be an added advantage.
Change management experience and familiarity with a change management tool.
Investigate production issues and respond based on production defect severity SLAs. Manage and respond to users in a timely manner.
Log incident tickets for production issues and user queries. Follow up on defect and incident closure and meet incident closure KPIs.
Ensure system availability per the respective agreed SLA.
Ensure daily start-of-day (SOD) and end-of-day (EOD) execution for supported applications completes successfully.
Knowledge of third-party monitoring tools like Splunk, Dynatrace, and Moogsoft.
Ensure application incident and task documentation is properly updated for each production release.
Enthusiastic, hardworking, proactive, and goal-oriented, with excellent communication and presentation skills, demonstrated professionalism, and attention to detail.
Proven ability to resolve production incidents under strict time constraints and provide workarounds.
Deploying releases, and configuring and maintaining the Windows operating system.
Managing regular system patches, updates, and security configurations.
Managing user accounts, groups, and permissions.

What you will need to have:
Bachelor's degree, preferably in Computer Science, Electrical/Computer Engineering, or a related field.
Overall 5-10 years of experience.
Experience working with RPG400/IV, RPG Free, CL, SQL, Embedded SQL, Query & ILE.
Experience with a modern scripting language like Python will be a plus.
Documents problems and corrective procedures.
Ability to recommend and implement process improvements.
Exposure to Python scripting would be an added benefit for automation.
Flexible to work in shifts or on weekends per business demand.

Thank you for considering employment with Fiserv. Please:
Apply using your legal name.
Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our commitment to Diversity and Inclusion:
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
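The posting mentions Python scripting as a benefit for automation, for example around the SOD/EOD completion checks listed above. A minimal, hypothetical sketch of such a check follows; the log-line format is invented for illustration and is not an actual AS/400 or Fiserv log format.

```python
import re

# Hypothetical log line: "2025-06-16 06:02:11 EODJOB COMPLETED"
LINE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (\w+) (\w+)$")

def check_batch(log_lines, required_jobs):
    """Return the set of required jobs that never logged COMPLETED,
    so an operator can be alerted before start of day."""
    completed = set()
    for line in log_lines:
        match = LINE.match(line.strip())
        if match and match.group(3) == "COMPLETED":
            completed.add(match.group(2))  # job name
    return set(required_jobs) - completed
```

A real script would read the lines from a monitoring feed or IFS file and raise an incident ticket for each missing job; the sketch covers only the parsing and reconciliation step.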
Posted 1 day ago
0.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India
Business Intelligence

BOLD is seeking a QA professional who will work directly with the BI development team to validate business intelligence solutions. You will build test strategies, test plans, and test cases for ETL and business intelligence components, validate the SQL queries related to those test cases, and produce test summary reports.

Job Description

ABOUT THIS TEAM
The BOLD Business Intelligence (BI) team is a centralized team responsible for managing all aspects of the organization's BI strategy, projects, and systems. The BI team enables business leaders to make data-driven decisions by providing reports and analysis. The team is responsible for developing and managing a latency-free, credible enterprise data warehouse, which serves as a data source for decision making and an input to various functions of the organization like Product, Finance, Marketing, and Customer Support. The BI team has four sub-components: data analysis, ETL, data visualization, and QA. It manages deliveries through Snowflake, Sisense, and MicroStrategy as its main infrastructure solutions. Other technologies, including Python, R, and Airflow, are also used in ETL, QA, and data visualization.

WHAT YOU'LL DO
Work with business analysts and BI developers to translate business requirements into test cases.
Validate the data sources, extraction of data, application of transformation logic, and loading of the data into the target tables.
Design, document, and execute test plans, test harnesses, test scenarios/scripts, and test cases for manual and automated testing, using bug-tracking tools.
WHAT YOU'LL NEED
Experience in data warehousing / BI testing, using any ETL and reporting tool.
Extensive experience in writing and troubleshooting SQL queries using any of the databases: Snowflake, Redshift, SQL Server, or Oracle.
Exposure to data warehousing and dimensional modelling concepts.
Experience in working from an ETL source-to-target mapping document.
Experience in testing the code on any of the ETL tools.
Experience in validating dashboards/reports on any of the reporting tools: Sisense, Tableau, SAP BusinessObjects, or MicroStrategy.
Hands-on experience and strong understanding of the Software Development Life Cycle (SDLC) and Software Testing Life Cycle (STLC).
Good experience with quality assurance methodologies like Waterfall, V-Model, Agile, and Scrum.
Well versed in writing detailed test cases for functional and non-functional requirements.
Experience with different types of testing, including black-box testing, smoke testing, functional testing, system integration testing, end-to-end testing, regression testing, and user acceptance testing (UAT), as well as load testing, performance testing, and stress testing.
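The source-to-target validation described above often reduces to reconciling row counts and keys between the source and target tables. Here is a minimal Python sketch using SQLite for illustration; the table and column names are placeholders, and production checks would run the same SQL against Snowflake, Redshift, or similar.

```python
import sqlite3

def validate_load(conn, source, target, key):
    """Compare row counts and find key values present in the source
    but missing from the target after an ETL load."""
    cur = conn.cursor()
    src_count = cur.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
    tgt_count = cur.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
    missing = [row[0] for row in cur.execute(
        f"SELECT s.{key} FROM {source} s "
        f"LEFT JOIN {target} t ON s.{key} = t.{key} "
        f"WHERE t.{key} IS NULL")]
    return src_count, tgt_count, missing
```

In practice a QA engineer would extend this with column-level checksums and transformation-rule checks taken from the mapping document; the count-and-key reconciliation shown here is just the first gate.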
Expertise in using TFS / JIRA / Excel for writing test cases and tracking defects. Exposure to scripting languages like Python to create automated test scripts, or to automated tools like QuerySurge, will be an added advantage.
An effective communicator with strong analytical abilities combined with the skills to plan, implement, and present projects.

EXPERIENCE - Senior QA Engineer, BI: 4.5+ years

#LI-SV1

Benefits
Outstanding Compensation
Competitive salary
Tax-friendly compensation structure
Bi-annual bonus
Annual appraisal
Equity in company

100% Full Health Benefits
Group Mediclaim, personal accident, & term life insurance
Group Mediclaim benefit (including parents' coverage)
Practo Plus health membership for employees and family
Personal accident and term life insurance coverage

Flexible Time Away
24 days paid leave
Declared fixed holidays
Paternity and maternity leave
Compassionate and marriage leave
Covid leave (up to 7 days)

Additional Benefits
Internet and home office reimbursement
In-office catered lunch, meals, and snacks
Certification policy
Cab pick-up and drop-off facility

About BOLD
We Transform Work Lives
As an established global organization, BOLD helps people find jobs. Our story is one of growth, success, and professional fulfillment. We create digital products that have empowered millions of people in 180 countries to build stronger resumes, cover letters, and CVs. The result of our work helps people interview confidently, finding the right job in less time. Our employees are experts, learners, contributors, and creatives.

We Celebrate And Promote Diversity And Inclusion
We value our position as an Equal Opportunity Employer. We hire based on qualifications, merit, and our business needs.
We don't discriminate regarding race, color, religion, gender, pregnancy, national origin or citizenship, ancestry, age, physical or mental disability, veteran status, sexual orientation, gender identity or expression, marital status, genetic information, or any other applicable characteristic protected by law.
Posted 1 day ago
0.0 - 6.0 years
0 Lacs
Gurugram, Haryana
On-site
Location: Gurugram, Haryana, India
This job is associated with 2 categories
Job Id: GGN00001963
Information Technology
Job Type: Full-Time
Posted Date: 06/16/2025

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
United's Digital Technology team is comprised of many talented individuals all working together with cutting-edge technology to build the best airline in the history of aviation. Our team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Job overview and responsibilities
As a Senior Quality Engineer at United Airlines, you will be responsible for the delivery of enterprise software testing projects and programs. In this role, you will design, implement, and execute test strategies and frameworks for large-scale enterprise software programs. Additionally, you will collaborate closely with US Quality Managers and Quality Leads to implement quality governance, including automation, performance testing, quality gates, key metrics (KPIs), and tool selection.

Lead enterprise projects: prepare and execute enterprise project/program test strategies.
Collaborate with cross-functional teams to ensure alignment with business goals and technical requirements.
Govern automation standards and best practices; conduct automation audits and assess ROI.
Own and maintain automation/performance artifacts, tools, licenses, and frameworks.
Identify and maintain testing KPIs, track trends, and own the Power BI reports.
Support SonarQube implementation, governance, and best practices.
Provide DevOps CI/CD implementation consultation.

This position is offered on local terms and conditions.
Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd, a wholly owned subsidiary of United Airlines Inc.

Qualifications
What's needed to succeed (Minimum Qualifications):
Bachelor's degree in computer science
AWS Cloud Practitioner
ISTQB / CSTE
4-6 years of relevant experience
Programming / scripting
Software Test Life Cycle
Agile & Waterfall methodologies
Backend testing (API, mainframe, middleware)
Release management processes
Cloud technologies
Test data modeling
Support DevOps CI/CD implementation
Able to work with distributed global teams
Ability to work under time constraints
Support during off / CST hours during production deployments
Must be legally authorized to work in India for any employer without sponsorship
Must be fluent in English (written and spoken)
Successful completion of an interview is required to meet job qualifications
Reliable, punctual attendance is an essential function of the position

What will help you propel from the pack (Preferred Qualifications):
Master's degree
Airline domain knowledge
Posted 1 day ago
0.0 - 6.0 years
0 Lacs
Gurugram, Haryana
On-site
Location: Gurugram, Haryana, India
This job is associated with 2 categories
Job Id: GGN00001967
Information Technology
Job Type: Full-Time
Posted Date: 06/16/2025

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions. Find your future at United! We're reinventing what our industry looks like, and what an airline can be - from the planes we fly to the people who fly them. When you join us, you're joining a global team of 100,000+ connected by a shared passion, with a wide spectrum of experience and skills to lead the way forward.

Achieving our ambitions starts with supporting yours. Evolve your career and find your next opportunity. Get the care you need with industry-leading health plans and best-in-class programs to support your emotional, physical, and financial wellness. Expand your horizons with travel across the world's biggest route network. Connect outside your team through employee-led Business Resource Groups. Create what's next with us. Let's define tomorrow together.

Job overview and responsibilities
As a Senior Automation Test Engineer in Digital Technology at United Airlines, you will be responsible for the delivery of enterprise software testing projects and programs, as well as operational and capital projects, with an automation-first approach. In this role, you will design, implement, and execute automation test strategies and frameworks for all deliverables.
Additionally, you will collaborate closely with US Quality Managers and Quality Leads to implement quality governance, quality gates, risk assessment, production signoffs, key metrics (KPIs), and tool selection. As a Senior Automation Test Engineer, you should have excellent problem-solving skills, attention to detail, and the ability to work in a fast-paced, team-oriented environment.

Lead the design and implementation of automation and manual test strategies for various software and systems utilizing best practices and standards.
Collaborate with software developers, QE analysts, and system engineers to identify system requirements and ensure quality is met from test planning to production deployment with an automation-first approach.
Own and maintain automation artifacts, tools, licenses, and frameworks.
Govern automation standards and best practices. Conduct automation audits and assess ROI.
Manage and mitigate testing-related risks and issues.
Identify and maintain testing KPIs, track trends, and own the Power BI reports.
Integrate automation frameworks with continuous integration and deployment pipelines.
Integrate GenAI into the existing automation framework and improve the quality of the automation test scripts for functional, regression, sanity, and end-to-end testing.

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd, a wholly owned subsidiary of United Airlines Inc.
Qualifications
What's needed to succeed (Minimum Qualifications):
Bachelor's degree in Computer Science or Computer Engineering, or 4 years of relevant work experience
4-6 years of experience in software automation
AWS Cloud Practitioner
ISTQB / CSTE or similar
LoadRunner
Test automation programming/scripting with solid skills in one of the tools for UI, API, and desktop testing: ReadyAPI, REST Assured, Selenium (UI), cloud testing, ADO/JIRA or similar, mainframe testing, Postman, Fiddler
Software Test Life Cycle
Agile & Waterfall methodologies
Backend testing (API, mainframe, middleware)
Release management processes
Cloud technologies
Test data modeling
Support DevOps CI/CD implementation
Able to work with distributed global teams
Ability to work under time constraints
Support during off / CST hours during production deployments

What will help you propel from the pack (Preferred Qualifications):
Master's degree
Airline domain knowledge
AccelQ, AWS (DynamoDB, Lambda, CloudWatch, Aurora DB), Java, Dynatrace, GitHub Actions, Harness, Kibana
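As a small illustration of the backend API testing this role covers, here is a hedged Python sketch of a response-contract check of the kind an automation framework might run after each deployment. The field names and status values are invented for the example and are not United's actual API.

```python
def validate_response(resp, required=("id", "status", "passenger")):
    """Return a list of contract violations for one API response dict.
    An empty list means the response passes the (hypothetical) contract."""
    errors = [f"missing field: {field}" for field in required
              if field not in resp]
    allowed = {"CONFIRMED", "CANCELLED", "PENDING"}
    if "status" in resp and resp["status"] not in allowed:
        errors.append(f"unexpected status: {resp['status']}")
    return errors
```

In a real suite this check would be run by a test runner (e.g., pytest) against live or recorded responses fetched with a tool like ReadyAPI or Postman; the sketch isolates only the contract-assertion step.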
Posted 1 day ago
India has a thriving job market for scripting professionals, with numerous opportunities available across various industries. Scripting roles are in high demand as companies increasingly rely on automation and efficient processes. If you are considering a career in scripting, here is a detailed guide to help you navigate the job market in India.
These cities are known for their strong IT sectors and are actively hiring professionals with scripting skills.
The average salary range for scripting professionals in India varies based on experience level. Entry-level positions can expect to earn between ₹3-5 lakhs per annum, while experienced professionals can command salaries in the range of ₹10-20 lakhs per annum.
A typical career path in scripting may involve starting as a Junior Developer, progressing to a Senior Developer, and then moving on to roles such as Tech Lead or Architect. With experience and expertise, one can also explore opportunities in management or consulting.
In addition to scripting, professionals in this field are often expected to have knowledge of:
- Programming languages such as Python, Ruby, or Perl
- Automation tools like Ansible or Puppet
- Database management skills
- Familiarity with DevOps practices
Here are 25 interview questions you may encounter when applying for scripting roles:
As you prepare for scripting job interviews, remember to showcase your problem-solving skills, attention to detail, and ability to work efficiently. With the right skills and mindset, you can confidently pursue opportunities in the scripting job market in India. Good luck!