
4685 Apache Jobs - Page 19

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Details

Position: Database Engineer 2
Experience: 5-8 years
Location: Nagawara, Bangalore
Work Mode: Onsite
Notice Period: Immediate to 15 days serving
Shift Timings: 11 am to 8 pm IST or 1 pm to 10 pm IST; flexible to work in shifts as needed

Essential Job Functions
- Implement, maintain, and optimize database structures, with a focus on data warehouses (Yellowbrick preferred), SQL Server, cloud-native databases, and Couchbase.
- Manage on-premises database solutions and ensure seamless integration with cloud services such as AWS and Azure.
- Oversee the installation, configuration, patching, and upgrading of database systems and software.
- Ensure database performance, security, and integrity by developing and enforcing database standards and best practices.
- Collaborate with IT and development teams to manage Confluence and Kafka integrations.
- Provide strategic input on database technologies and data management strategies.
- Participate in disaster recovery planning and database backup procedures.
- Offer technical support and training to staff on database management and troubleshooting.
- Design and implement tools and solutions for business continuity and disaster management.

Minimum Qualifications
- Education: bachelor's degree or equivalent relevant work experience
- Area of Study: Computer Science, Information Technology, or a related field, or equivalent relevant work experience
- Experience: managing and administering databases

Skills (minimum of 3 skills, maximum of 8 skills)
- Microsoft Azure SQL
- Microsoft SQL Server
- PostgreSQL
- Azure Cosmos DB
- Amazon Aurora
- MongoDB
- Apache Kafka
- Infrastructure as Code (Terraform)

Knowledge and Abilities (minimum of 3 skills, maximum of 8 skills)
- System Integration: expertise in integrating various database systems with tools like Confluence and Kafka.
- Security Management: proficiency in implementing effective database security measures.
- Infrastructure as Code: ability to develop, deploy, and manage database technologies through Infrastructure as Code (Terraform).
- Project Management: skills to manage projects, timelines, and resources effectively.
- Certifications: certification as a database administrator for either Azure or AWS.
- Regulatory Compliance: understanding of data protection regulations and the ability to ensure compliance.

Skills: microsoft azure sql, database, sql, postgresql, kafka, azure cosmos db, infrastructure, amazon aurora, microsoft sql server, mongodb, azure, apache kafka, infrastructure as code (terraform)

Posted 3 days ago


0 years

0 Lacs

India

Remote


Design, provision, and document a production-grade AWS micro-service platform for an Apache-powered ERP implementation, hitting our 90-day "go-live" target while embedding DevSecOps guard-rails the team can run without you.

Key Responsibilities

Cloud Architecture & IaC
- Author Terraform modules for VPC, EKS (Graviton), RDS (MariaDB Multi-AZ), MSK, ElastiCache, S3 lifecycle, API Gateway, WAF, Route 53.
- Implement node pools (App, Spot Analytics, Cache, GPU) with Karpenter autoscaling.

CI/CD & GitOps
- Set up GitHub Actions pipelines (lint, unit tests, container scan, Terraform plan).
- Deploy Argo CD for Helm-based application roll-outs (ERP, Bot, Superset, etc.).

DevSecOps Controls
- Enforce OPA Gatekeeper policies, IAM IRSA, Secrets Manager, AWS WAF rules, ECR image scanning.
- Build CloudWatch/X-Ray dashboards; wire alerting to Slack/email.

Automation & DR
- Define backup plans (RDS PITR, EBS, S3 Standard-IA → Glacier).
- Document the cross-Region fail-over run-book (Route 53 health checks).

Standard Operating Procedures
- Draft SOPs for patching, scaling, on-call, incident triage, and budget monitoring.

Knowledge Transfer (KT)
- Run 3× 2-hour remote workshops (infra deep-dive, CI/CD hand-over, DR drill).
- Produce a "Day-2" wiki: diagrams (Mermaid), run-books, FAQ.

Required Skill Set
- 8+ years designing AWS micro-service / Kubernetes architectures (ideally EKS on Graviton).
- Expert in Terraform, Helm, GitHub Actions, Argo CD.
- Hands-on with RDS MariaDB, Kafka (MSK), Redis, SageMaker endpoints.
- Proven DevSecOps background: OPA, IAM least-privilege, vulnerability scanning.
- Comfortable translating infra diagrams into plain-language SOPs for non-cloud staff.
- Nice-to-have: prior ERP deployment experience; WhatsApp Business API integration; EPC or construction IT domain knowledge.

How Success Is Measured
- Go-live readiness: the production cluster passes load, fail-over, and security tests by Day 75.
- Zero critical CVEs exposed in the final Trivy scan.
- 99% IaC coverage: manual console changes are not permitted.
- Team self-sufficiency: internal staff can recreate the stack from scratch using docs and KT alone.

Posted 3 days ago


0 years

0 Lacs

Maharashtra, India

On-site


System Administrator

Brennan. Where true performance thrives.

At Brennan, we believe that how technology is delivered is every bit as important as what the technology is. We focus on creating real and relevant value for customers, with solutions that fit their specific needs and always reflect their true interests. It's a claim backed by our True Performance System - a way of working engineered to get us closer, and deliver better, for our customers and their actual experience of technology.

Why join Brennan
True performance for our customers starts with a true belief in our people. It's why we've structured our business to help our teams, and their talents, shine bright. It's why we've created a workplace where people of all backgrounds, beliefs and experiences are welcomed and empowered. And it's why we've built an organisation where real innovation makes a genuine impact and generates true rewards for our team members.

True rewards
In addition to competitive remuneration, Brennan offers extensive benefits, including:
- Training and certification bonuses
- Culture Awards that recognise excellence
- Brennan Daredevils - our annual, all-expenses-paid trip awarded to our top performers and outstanding contributors
- Vibrant, fun social activities
- Discounted hardware and software
- An environment that embraces learning and development

The Role
Provide 1st, 2nd and 3rd level technical support to Brennan IT clients, engineers and staff, and 1st level monitoring and technical support to Brennan IT clients and staff. Coordinate, collaborate on and escalate incidents within stipulated timelines; maintain existing cloud/infrastructure services and ensure the environment runs optimally; and continuously improve the efficiency and excellence of service delivery, as measured by client-facing surveys and ratings, in every department you are part of.

2-5 years minimum experience in a Windows Server Administration / System Administration / Wintel Administration / Hyper-V Administration / IT Technical Support role for international clients, preferably at a managed services IT provider or IT company.

Role Responsibilities
- Maintain and contribute to the KMS for clients and the internal team, covering both technical content and processes
- Server monitoring using SCOM, N-able and LogicMonitor; basic Intune / SCCM configurations
- Understanding of O365, Mimecast, Intune and Azure integration
- Vendor management (hardware and software vendors - HP, Dell, Microsoft, VMware, Citrix and others)
- Storage understanding: NAS, SAN (e.g., Data Domain, IBM, NetApp, Hitachi, Fujitsu, HP 3PAR)
- Administration of Windows Server: groups, group policies, DNS, DHCP
- Understanding of backups and replication with Veeam, Symantec, Zerto, Commvault
- On-premises backup alert management, monitoring and restoration
- SSL certificate renewal and installation across various roles (IIS, ADFS, ADC/NetScaler, Apache, SQL Reporting Services, WAS, load balancers)
- Smooth and timely customer engagement
- Disk, CPU and snapshot management and provisioning
- Monitoring and management experience with ESXi/Hyper-V hosts, Nutanix, Dell, HP SimpliVity
- Managing and understanding failover clusters and NLB
- Citrix/RDS/WVD knowledge, including publishing, upgrading and securely managing applications
- Managing Windows Update compliance
- Antivirus management: Sophos, Defender
- Patching ESXi hosts and vCenter updates
- Knowledge of vulnerability management and critical remediation
- Change management, performing RCA and clearly articulating actions/outcomes

Key Competencies and Qualifications required
- Knowledge of as many technologies as possible, such as VMware, Windows hypervisors, Azure administration, O365, Mimecast, SQL administration and Windows administration
- Experience in VM deployment, VM migration and managing host clusters
- Extremely high attention to detail with a methodical troubleshooting process
- Good verbal and written communication skills
- Knowledge of storage technologies and HP, IBM, Dell and Cisco servers, plus O365 and Azure administration
- Proactive rather than reactive approach
- ITIL Service Management Foundation accreditation

Essential Skills
- Windows Server, including 2016/2019, DNS, DHCP, Group Policy
- Active Directory 2012 and above
- VMware / virtualization (Hyper-V, VMware)
- O365, Azure administration

Desired Skills
- Exposure to backup tools like Veeam/Commvault/Backup Exec
- Windows administration certifications, 2012 and above
- Exposure to SAN/NAS
- MS Azure and Office 365 administration
- Symantec Endpoint, McAfee, Sophos, SentinelOne or CrowdStrike
- Exposure to blade servers and configurations
- ITIL Foundation and the ServiceNow ITSM tool

Brennan is an equal opportunity employer

Posted 3 days ago


7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Come work at a place where innovation and teamwork come together to support the most exciting missions in the world!

We are seeking a talented Lead Software Engineer to develop features for our EDR product. Working with a team of engineers and architects, you will be responsible for prototyping, designing, developing, and supporting a highly scalable SaaS-based EDR product. This is a great opportunity to be an integral part of a team building Qualys' next-generation microservices-based technology platform and to work on challenging, business-impacting projects.

Responsibilities:
- Design and develop the EDR product in the cloud.
- Build highly scalable microservices and data processing pipelines.
- Work on Java-based microservices with clean, extensible code, adopting suitable design principles and patterns.
- Design, develop and maintain products that process events and serve REST APIs.
- Research and implement code designs, and adopt new technologies and skills.

Qualifications:
- Bachelors/Masters/Doctorate in Computer Science or equivalent
- 7+ years of experience with Java 8
- 3+ years of experience in Spring/Spring Boot and microservices
- 2+ years of experience in Kafka
- Hands-on experience with Spring Boot and Hibernate
- Strong logical skills for code design and implementation
- Writing high-performance, reliable and maintainable code
- Experience designing streaming applications, and developing and delivering scalable solutions
- Good knowledge of SQL, advanced data structures, design patterns and object-oriented principles

Good to have:
- Experience in Docker, Kubernetes
- Experience in NoSQL databases like Elasticsearch, Cassandra, etc.
- Experience in stream processing with Kafka and related open-source tools/technologies
- Experience in Apache Flink, Siddhi queries
- Knowledge of security log sources
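The streaming work described above (Kafka/Flink event pipelines) usually boils down to aggregations over time windows. None of this code comes from the posting; it is a minimal, illustrative sketch of a tumbling-window count in plain Python, with the event format assumed:

```python
from collections import defaultdict

def count_by_window(events, window_secs=60):
    """Tumbling-window event counts -- the kind of aggregation a
    Kafka/Flink streaming job performs, shown here without a broker.
    Each event is assumed to be a (timestamp_seconds, event_type) pair."""
    counts = defaultdict(int)
    for ts, event_type in events:
        # Bucket each event into the window that contains its timestamp.
        window_start = ts - (ts % window_secs)
        counts[(window_start, event_type)] += 1
    return dict(counts)

events = [(5, "login"), (30, "login"), (70, "scan")]
print(count_by_window(events))  # {(0, 'login'): 2, (60, 'scan'): 1}
```

A real EDR pipeline would consume the events from a Kafka topic and emit the counts downstream, but the windowing logic is the same.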

Posted 3 days ago


5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You'll Do
- Design, develop, and operate high-scale applications across the full engineering stack
- Design, develop, test, deploy, maintain, and improve software
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.)
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality
- Participate in a tight-knit, globally distributed engineering team
- Triage product or system issues and debug/track/resolve them by analyzing their sources and their impact on network or service operations and quality
- Manage your own project priorities, deadlines, and deliverables
- Research, create, and develop software applications to extend and improve Equifax solutions
- Collaborate on scalability issues involving access to data and information
- Actively participate in Sprint planning, Sprint retrospectives, and other team activities

What Experience You Need
- Bachelor's degree or equivalent experience
- 5+ years of software engineering experience
- 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, CSS
- 5+ years experience with cloud technology: GCP, AWS, or Azure
- 5+ years experience designing and developing cloud-native solutions
- 5+ years experience designing and developing microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes
- 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines; understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs

What Could Set You Apart
- Self-starter who identifies and responds to priority shifts with minimal supervision
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others
- UI development (e.g. HTML, JavaScript, Angular and Bootstrap)
- Experience with backend technologies such as Java/J2EE, Spring Boot, SOA and microservices
- Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven and Gradle
- Agile environments (e.g. Scrum, XP)
- Relational databases (e.g. SQL Server, MySQL)
- Atlassian tooling (e.g. JIRA, Confluence, and GitHub)
- Developing with a modern JDK (v1.7+)
- Automated testing: JUnit, Selenium, LoadRunner, SoapUI

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax?
At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life's pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real, and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference, and we are looking for talented team players to join us as we help people live their financial best.

Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Posted 3 days ago


8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Overview: TekWissen is a global workforce management provider with operations in India and many other countries. The client below is a global company with shared ideals and a deep sense of family. From its earliest days as a pioneer of modern transportation, it has sought to make the world a better place - one that benefits lives, communities and the planet.

Job Title: Data Architect
Location: Chennai
Work Type: Onsite

Position Description:
The Materials Management Platform (MMP) is a multi-year transformation initiative aimed at transforming the client's Materials Requirement Planning and Inventory Management capabilities, as part of a larger Industrial Systems IT transformation effort. This position is responsible for designing and deploying a data-centric architecture in GCP for the Materials Management Platform, which exchanges data with multiple modern and legacy applications in Product Development, Manufacturing, Finance, Purchasing, N-Tier Supply Chain, and Supplier Collaboration.

Skills Required: Data Architecture, GCP
Skills Preferred: Cloud Architecture
Experience Required: 8 to 12 years

Experience Preferred:
- Requires a bachelor's or foreign equivalent degree in computer science, information technology or a technology-related field
- 8 years of professional experience in data engineering, data product development and software product launches
- At least three of the following languages, with performance-tuning experience: Java, Python, Spark, Scala, SQL
- 4 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:
  - Data warehouses like Google BigQuery
  - Workflow orchestration tools like Airflow
  - Relational database management systems like MySQL, PostgreSQL, and SQL Server
  - Real-time data streaming platforms like Apache Kafka and GCP Pub/Sub
  - Microservices architecture to deliver large-scale real-time data processing applications
  - REST APIs for compute, storage, operations, and security
  - DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, Docker
  - Project management tools like Atlassian JIRA
- Automotive experience is preferred
- Support in an onshore/offshore model is preferred
- Excellent at problem solving and prevention
- Knowledge and practical experience of agile delivery

Education Required: Bachelor's Degree
Education Preferred: Certification Program

TekWissen® Group is an equal opportunity employer supporting workforce diversity.

Posted 3 days ago


0 years

0 Lacs

Vadodara, Gujarat, India

On-site


Work as a senior member of a team responsible for developing large-scale, highly available and fault-tolerant next-generation SaaS solutions that are purpose-built for health care analytics, in accordance with established processes.

Core Technical Skills

Java Ecosystem
- Advanced Java development, with extensive experience in Java 21 features
- Deep understanding of Spring Framework, Spring Boot, and Spring Security
- Expertise in implementing authentication, authorization, and secure coding practices
- Implementation of robust security measures using Spring Security
- Proficiency in JBoss BPM Suite (jBPM) for business process automation
- Experience with microservices architecture

Python & R Development (Good to Have)
- Python and/or R application development and scripting
- Integration of Python services with Java-based systems
- Data processing and analysis using Python and/or R libraries

Key Responsibilities

Architecture & Design
- Design and implement scalable, secure, and maintainable enterprise-level solutions
- Establish coding standards, best practices, and architectural guidelines
- Integrate business process management solutions with existing systems
- Ensure system architectures align with business requirements and the technology roadmap

Technical Leadership
- Lead and mentor development teams in technical implementation
- Conduct code reviews and provide constructive feedback
- Evaluate and recommend new technologies and frameworks
- Drive technical decision-making and problem-solving
- Collaborate with stakeholders to understand requirements and propose solutions

Additional Skills
- Strong problem-solving and analytical skills
- Excellent communication and leadership abilities
- Experience with CI/CD pipelines and DevOps practices
- Knowledge of cloud platforms (AWS, Azure, or GCP)
- Proven experience with Apache Kafka and event-driven architectures
- Solid understanding of Apache Spark and big data processing
- Understanding of containerization (Docker, Kubernetes)
- Experience with agile methodologies
- Database design and optimization skills

Posted 3 days ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Position: Senior Java Developer
Location: Chennai
Employment Type: Full-time

Key Responsibilities:
- Analyze and remediate security vulnerabilities in legacy Java web applications, with an emphasis on addressing Cross-Site Scripting (XSS) and other common issues.
- Refactor and update application codebases built with JSP, Servlets, and traditional Java frameworks.
- Upgrade outdated libraries and dependencies (e.g., Apache Commons, Spring Framework) using Maven or Gradle to mitigate known vulnerabilities.
- Ensure all changes align with secure coding standards and best practices.
- Collaborate with team members through Git, following established workflows including branching strategies and pull request reviews.
- Work independently or as part of a distributed team, effectively communicating progress and blockers.

Required Qualifications:
- Hands-on experience in Java web application development.
- Proven expertise with JSP, Servlets, and legacy Java frameworks.
- Strong understanding of secure coding practices and common web vulnerabilities.
- Experience with dependency management and upgrades using Maven or Gradle.
- Proficiency in the Eclipse IDE for Java development.
- Solid knowledge and practical experience using Git for source control, including branching strategies and code reviews.

Regards,
Patrick Fernandez
Talent Acquisition Group - Strategic Recruitment Manager
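The XSS remediation this role centers on usually comes down to escaping untrusted input before it reaches markup (in a Java/JSP stack, typically via JSTL's c:out or OWASP encoders). As a language-neutral illustration of the principle, not the posting's actual codebase, here is a minimal sketch in Python:

```python
import html

def render_comment(user_input: str) -> str:
    # Escape HTML metacharacters (<, >, &, quotes) before interpolating
    # untrusted input into markup -- the core of most XSS remediation.
    return "<p>" + html.escape(user_input) + "</p>"

# A script tag supplied by an attacker is rendered inert:
print(render_comment('<script>alert("xss")</script>'))
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

The same idea applies in JSP: emit user data through an escaping construct rather than raw expression output.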

Posted 3 days ago


1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About Us
Signa Tech is a rapidly growing company that aims to transform the way people work by leveraging the power of IT. Our goal is to help companies simplify, streamline, make informed decisions, and breathe.

Why should you join us?
At Signa Tech, we are not just looking for employees; we are seeking talented individuals who want to be part of a winning team. We are on the lookout for a passionate Data Engineer to join our exceptional team and play a pivotal role in our data-driven journey.

What We Are Looking For
- Participate in creating systems that help ingest data from various sources.
- Write efficient and well-documented code under supervision.
- Help in the scheduling and monitoring of data pipelines.
- Assist in writing queries for analysis under guidance.
- Collaborate with teams to understand requirements and effectively communicate task details.
- Develop and manage Power BI dashboards and reports.
- Create complex SQL queries to pull, analyse, and interpret data from various sources.
- Work closely with data architects, data scientists, and other stakeholders to ensure optimal data solutions.
- Stay updated with the latest industry trends and best practices.

Requirements
- Fresher, or 1 year of experience or an internship in a related field is preferable.
- Basic understanding of or exposure to BI design concepts.
- Eagerness to learn about delivering data science components.
- Willingness to collaborate with product and engineering teams.
- Good communication skills for team-based work.
- Ability to work with guidance and follow workflows.
- Interest in or basic knowledge of dashboarding and data visualization software/AI tools, such as Power BI, Tableau, Splunk, or Apache Spark, is a plus.

Perks: SignaTech provides individuals with a competitive salary, a full range of benefits, chances for professional advancement, and a dynamic work environment that values creativity and teamwork. Join us and be part of something great at SignaTech.

Skills: data, teams, architects, collaboration, power bi, sql, apache, data ingestion, data visualization, code, data pipelines, splunk, data analysis, apache spark, communication, tableau
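The SQL-for-analysis work this role describes is typically aggregate queries that a dashboard sits on top of. As a small, self-contained sketch (the table and data are invented for illustration, using Python's built-in sqlite3):

```python
import sqlite3

# In-memory database with a toy sales table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('North', 100), ('North', 50), ('South', 75);
""")

# An aggregate query of the kind a Power BI / Tableau report would use.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('North', 150.0), ('South', 75.0)]
```

In practice the same query shape runs against a production warehouse rather than SQLite.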

Posted 3 days ago


6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are seeking a talented individual to join our Technology team at Mercer. This role will be based in Gurugram. This is a hybrid role that requires working at least three days a week in the office.

Role: Senior DevOps Engineer

We are looking for a candidate with a minimum of 6 years of experience in DevOps. The candidate should have a strong, deep understanding of Amazon Web Services (AWS) and DevOps tools like Terraform, Ansible, and Jenkins.

Location: Gurgaon
Functional Area: Engineering
Education Qualification: Graduate/Postgraduate
Experience: 6-9 years

We will count on you to:
- Deploy infrastructure on the AWS cloud using Terraform
- Deploy updates and fixes
- Build tools to reduce the occurrence of errors and improve customer experience
- Perform root cause analysis of production errors and resolve technical issues
- Develop automation scripts
- Perform troubleshooting and maintenance

What you need to have:
- 6+ years of technical experience in the DevOps area
- Knowledge of the following technologies and applications: AWS; Terraform; Linux administration and shell scripting; Ansible; CI server: Jenkins; Apache/Nginx/Tomcat
- Good to have experience in the following technologies: Python

What makes you stand out:
- Excellent verbal and written communication skills; comfortable interfacing with business users
- Good troubleshooting and technical skills
- Able to work independently

Why join our team:
We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.

Mercer, a business of Marsh McLennan (NYSE: MMC), is a global leader in helping clients realize their investment objectives, shape the future of work and enhance health and retirement outcomes for their people. Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $23 billion and more than 85,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit mercer.com, or follow on LinkedIn and X.

The Mercer Assessments business, one of the fastest-growing verticals within the Mercer brand, is a leading global provider of talent measurement and assessment solutions. As part of Mercer - the world's largest HR consulting firm and a wholly owned subsidiary of Marsh McLennan - we are dedicated to delivering talent foresight that empowers organizations to make informed, critical people decisions. Leveraging a robust, cloud-based assessment platform, Mercer Assessments partners with over 6,000 corporations, 31 sector skill councils, government agencies, and more than 700 educational institutions across 140 countries. Our mission is to help organizations build high-performing teams through effective talent acquisition, development, and workforce transformation strategies.

Our research-backed assessments, advanced technology, and comprehensive analytics deliver transformative outcomes for both clients and their employees. We specialize in designing tailored assessment solutions across the employee lifecycle, including pre-hire evaluations, skills assessments, training and development, certification exams, competitions and more. At Mercer Assessments, we are committed to enhancing the way organizations identify, assess, and develop talent. By providing actionable talent foresight, we enable our clients to anticipate future workforce needs and make strategic decisions that drive sustainable growth and innovation.

Posted 3 days ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Senior Automation Engineer
Job Type: Full-time, Contractor

About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary:
We are seeking a detail-oriented and innovative Senior Automation Engineer to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.

Key Responsibilities:
- Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
- Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
- Create detailed and effective test plans and test cases based on technical requirements and business specifications.
- Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
- Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
- Document test cases, results, and identified defects; communicate findings clearly to the team.
- Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
- Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.

Required Skills and Qualifications:
- Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
- Hands-on experience with Databricks and with data warehouse and data lake architectures.
- Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt tests, or similar.
- Proficiency in integrating automated tests within CI/CD pipelines on cloud platforms (AWS; Azure preferred).
- Excellent written and verbal communication skills, with the ability to translate technical concepts for diverse audiences.
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Demonstrated problem-solving skills and a collaborative approach to teamwork.

Preferred Qualifications:
- Experience implementing security and data protection measures in data-driven applications.
- Ability to integrate user-facing elements with server-side logic for seamless data experiences.
- Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.
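The data-validation work described above (row-count parity, null checks, consistency across systems) reduces to a handful of reusable assertions. This is a minimal sketch under stated assumptions - the table names, key column, and SQLite backend are stand-ins for a real Databricks warehouse:

```python
import sqlite3

def validate_copy(conn, source, target):
    """Two basic data-quality checks of the kind an automated test
    suite runs after a pipeline copies data: row-count parity between
    source and target, and no NULL keys in the target.
    Returns a list of failure messages (empty means the checks pass).
    Table names are assumed to be trusted, not user input."""
    failures = []
    src_count = conn.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
    tgt_count = conn.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
    if src_count != tgt_count:
        failures.append(f"row count mismatch: {src_count} vs {tgt_count}")
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {target} WHERE id IS NULL"
    ).fetchone()[0]
    if nulls:
        failures.append(f"{nulls} NULL ids in {target}")
    return failures

# Toy source/target tables; the target has a NULL key the check should catch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER, v TEXT);
    CREATE TABLE tgt (id INTEGER, v TEXT);
    INSERT INTO src VALUES (1, 'a'), (2, 'b');
    INSERT INTO tgt VALUES (1, 'a'), (NULL, 'b');
""")
print(validate_copy(conn, "src", "tgt"))  # ['1 NULL ids in tgt']
```

In CI, each returned failure message would typically become a failing test assertion.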

Posted 3 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Summary: We are seeking a highly skilled Lead Data Engineer/Associate Architect to lead the design, implementation, and optimization of scalable data architectures. The ideal candidate will have a deep understanding of data modeling, ETL processes, cloud data solutions, and big data technologies. You will work closely with cross-functional teams to build robust, high-performance data pipelines and infrastructure to enable data-driven decision-making. Experience: 7 - 12 years Work Location: Hyderabad (Hybrid) / Remote Mandatory skills: AWS, Python, SQL, Airflow, DBT. Must have done 1 or 2 projects in the clinical domain/industry. Responsibilities: Design and develop scalable and resilient data architectures that support business needs, analytics, and AI/ML workloads. Data Pipeline Development: Design and implement robust ETL/ELT processes to ensure efficient data ingestion, transformation, and storage. Big Data & Cloud Solutions: Architect data solutions using cloud platforms like AWS, Azure, or GCP, leveraging services such as Snowflake, Redshift, BigQuery, and Databricks. Database Optimization: Ensure performance tuning, indexing strategies, and query optimization for relational and NoSQL databases. Data Governance & Security: Implement best practices for data quality, metadata management, compliance (GDPR, CCPA), and security. Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to translate business requirements into scalable solutions. Technology Evaluation: Stay updated with emerging trends, assess new tools and frameworks, and drive innovation in data engineering. Required Skills: Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Experience: 7 - 12+ years of experience in data engineering. Cloud Platforms: Strong expertise in AWS data services.
Databases: Hands-on experience with SQL, NoSQL, and columnar databases such as PostgreSQL, MongoDB, Cassandra, and Snowflake. Programming: Proficiency in Python, Scala, or Java for data processing and automation. ETL Tools: Experience with tools like Apache Airflow, Talend, DBT, or Informatica. Machine Learning & AI Integration (Preferred): Understanding of how to architect data solutions for AI/ML applications.
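The ETL/ELT pattern this listing keeps returning to can be shown as a toy sketch. This is a hypothetical illustration, not the employer's pipeline: plain Python functions stand in for the extract, transform, and load stages that tools like Airflow or DBT would orchestrate, and the record shape is invented.

```python
def extract(rows):
    """Extract: yield raw records (plain dicts standing in for source rows)."""
    yield from rows

def transform(records):
    """Transform: normalise names and drop records missing a primary key."""
    for r in records:
        if r.get("id") is None:
            continue  # data-quality rule: no key, no load
        yield {"id": r["id"], "name": r["name"].strip().title()}

def load(records, target):
    """Load: append transformed records to the target store (a list here)."""
    target.extend(records)
    return target

raw = [{"id": 1, "name": "  ada lovelace "}, {"id": None, "name": "ghost"}]
warehouse = []
load(transform(extract(raw)), warehouse)
print(warehouse)  # [{'id': 1, 'name': 'Ada Lovelace'}]
```

Keeping each stage a small, pure function is what makes pipelines like this easy to test and to schedule as separate tasks in an orchestrator.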

Posted 3 days ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We are looking for a highly skilled and motivated Software Engineer with a strong background in implementing PriceFX solutions and deep expertise in Groovy scripting, Java, JavaScript, and Apache Camel. In this role, you will be responsible for delivering scalable pricing and integration solutions, contributing to digital transformation initiatives for global enterprises. Job Location – Hyderabad, Ahmedabad, and Indore, India. Key Responsibilities Support the technical implementation of PriceFX modules including QuoteConfigurator, PriceBuilder, RebateManager, and others. Collaborate with senior engineers and business analysts to gather requirements and implement solutions. Write and maintain Groovy scripts to implement custom business logic and calculations within PriceFX. Develop and maintain integrations using Apache Camel, REST APIs, and other middleware tools. Develop backend components and services using Java and frontend elements using JavaScript when required. Create and maintain technical documentation, best practices, and reusable assets. What You’ll Bring: Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience. Mandatory Skills 2+ years of experience in PriceFX implementation. Proficient in Groovy scripting and PriceFX calculation logic setup. Hands-on experience in Java (Spring Boot preferred). Experience with Apache Camel for integration flows and routing. Solid understanding of JavaScript for light UI customization or scripting needs. Familiarity with RESTful APIs, JSON, XML, and third-party system integrations. Good understanding of pricing processes and enterprise software implementation. Strong problem-solving skills and attention to detail. Excellent communication and documentation skills. Preferred Skills (Nice to Have): Experience working with cloud platforms (AWS, Azure, or GCP). Exposure to CI/CD, Git, and containerization (Docker/Kubernetes).
Background in enterprise pricing, CPQ, or revenue management platforms. Experience in Agile/Scrum development environments.

Posted 3 days ago

Apply

4.0 - 5.11 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Position Title: Sr. Java Developer Web Openings Experience Level 4 to 5.11 Years - 1 Developer 6 to 7.11 Years - 1 Developer The position is at a BFSI Domain Client based out of Kanjurmarg (Mumbai). The Client is a market leader in its domain. Selected candidates will be working on cutting-edge technologies, as the Client is looking for dynamic, hardworking, committed candidates. Qualification B.E./B.Tech/M.Tech/MCA. Key Responsibilities Developing, releasing, and supporting Java-based multi-tier robust web applications and standalone systems. Deliver across the entire app life cycle: design, build, deploy, test, release, and support. Working directly with developers and product managers to conceptualize, build, test and realise products. Work on bug fixing and improving application performance in coordination with the QA team. Continuously discover, evaluate, and implement new technologies to maximize development efficiency. Optimizing performance for the apps and keeping up to date on the latest industry trends in emerging technologies. Required Skills Candidate should have experience in developing applications using JAVA/J2EE programming skills with a sound understanding of Java 8-17. Strong proficiency in back-end language (Java), Java frameworks (Spring Boot, Spring MVC) and JavaScript frameworks (Angular, AngularJS), Kafka. Strong JS skills on jQuery, HTML and CSS. Strong understanding of and experience with Microservices. Experience working with RDBMS concepts, SQL syntax, and complex query processing and optimization (e.g. PostgreSQL, Oracle). Exposure to handling and configuring web servers (e.g. Apache) and UI/UX design. Strong understanding of object-oriented programming (OOP) concepts and design patterns. Experience in web services and a clear understanding of RESTful APIs to connect to back-end services.
Excellent problem-solving skills, with the ability to debug and troubleshoot code issues. Strong communication and teamwork skills, with the ability to work collaboratively with cross-functional teams. Selection Procedure Face-to-face round of interview at the Greysoft office. Virtual round of interview by the Client. Machine Test (Client Location). Joining Period Immediate to 15 days. Interested candidates can email their updated resume to recruiter@greysoft.in This job is provided by Shine.com

Posted 3 days ago

Apply

4.0 - 5.11 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Position Title: Sr. Java Developer Core Experience Level 4 to 5.11 Years 1 6 to 7.11 Years 1 The position is at a BFSI Domain Client based out of Kanjurmarg (Mumbai). The Client is a market leader in its domain. Selected candidates will be working on cutting-edge technologies, as the Client is looking for dynamic, hardworking, committed candidates. Qualification B.E./B.Tech/M.Tech/MCA. Key Responsibilities Conceptualizing, developing, releasing, and supporting Java-based multi-tier robust web applications and standalone systems. Deliver across the entire app life cycle: design, build, deploy, test, release, and support. Optimizing performance for the systems; continuously discover, evaluate, and implement emerging technologies to maximize development efficiency. Working directly with developers and product managers to conceptualize, build, test and realise products. Work on bug fixing and improving application performance in coordination with the QA team. Required Skills Strong knowledge of Java 8-17 including the Collections framework and data structures, multithreading and concurrency management, memory management, Kafka, request queuing, NIO, IO, TCP/IP, and file systems. Candidate should have experience in developing applications using JAVA/J2EE programming skills, preferably with real-time response systems. Strong proficiency in back-end language (Java), Java frameworks (Spring Boot, Spring MVC). Strong understanding of and experience with Microservices. Experience working with RDBMS concepts, SQL syntax, and complex query processing and optimization (e.g. PostgreSQL, Oracle), and in-memory databases such as Redis, Memcache. Exposure to handling and configuring web servers (e.g. Apache) and UI/UX design. Strong understanding of object-oriented programming (OOP) concepts and design patterns. Excellent problem-solving skills, with the ability to debug and troubleshoot code issues. Strong communication and teamwork skills, with the ability to work collaboratively with cross-functional teams.
Selection Procedure Face-to-face round of interview at the Greysoft office. Virtual round of interview by the Client. Machine Test (Client Location). Joining Period Immediate to 15 days. Interested candidates can email their updated resume to recruiter@greysoft.in This job is provided by Shine.com

Posted 3 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote


Experience Range - 10 yrs Location : Pune/Remote Qualification : BE/BTech/MCA/MTech (Preferably CS/IT) Technical Skills Required Mandatory: Good knowledge of networking and troubleshooting tools - DNS, DHCP, TLS, SSL, security protocols, routing, packet data analysis. Prior experience working with Wireshark, Nmap, HTTP analysers, DebugView, etc. Knowledge of VAPT analysis and of security software such as DLP and firewalls (endpoint security is an add-on). Product and Application Support Good experience in product and application support with sound knowledge of networking and IT infrastructure. Must have worked on supporting enterprise security applications such as zero trust, Identity Management solutions, or Multifactor Authentication solutions. Any support experience with virtualization products from Citrix, Microsoft, Dell, etc. Should have worked with a reverse proxy solution. Should understand how to troubleshoot key web servers such as Apache, NGINX, Tomcat, IIS, etc. OWASP Application Security Guidelines. How big enterprises typically manage product installation and upgrades, and how patch management is done. Knowledge of PowerShell scripting, Linux shell scripting, and Python. Infra Support Excellent knowledge of Windows Server operating systems & roles - Active Directory, Group Policies, Remote Desktop Services, IIS, FSMO roles. Process data analysis and knowledge of Windows Sysinternals tools will be an add-on.
Batch and PowerShell scripting will be desirable. Work experience in client-side operating systems - Windows 7/8/10 - is a must. Very good working knowledge of Linux & Mac operating systems. Support Management and Tools Knowledge Good knowledge of L1 and L2 ticket-tracking tools. Good knowledge of service level management tools. Should be able to manage escalations and the agreed and provided SLA for various clients. Should be able to provide reports for any escalations, Root Cause Analysis (RCA), and productivity reports. Must make sure escalations are managed at the root level and there are zero repeat escalations. Excellent knowledge of server operating systems (Win 2016/19/22, Linux flavors). Proficient in networking - DNS, DHCP, basic routing concepts, network monitoring commands & tools. Good knowledge of IT infrastructure & security concepts - storage, file servers, SSL certificates, VPN gateways, VAPT analysis, UTMs, etc. Good knowledge of Azure Cloud, conceptual understanding of Desktop as a Service, and working experience in Azure Virtual Desktop or equivalent products. Role and Responsibilities: To provide solutions, not workarounds. Be a good listener to customers and provide on-time deliveries; involve appropriate authorities when escalations are required. Make sure support deliveries are within SLAs. Provide solution documents, KB articles & RCAs, and make sure team members are following the process. Proactively involve in escalations and make sure customer commitments are met. Coordinate with the Product Management team for bug fixes, new feature escalations & development-related items, and ensure on-time resolution. Good with statistical data; analyze priorities and take part in product improvement discussions. Work as a leader on special or ongoing requirements. Use appropriate judgement in critical situations.
Reproduce customer issues and, if required, analyze the root cause; check and verify any viable solutions available other than development, such as creating scripts, simple solutions, etc. Good to have: Knowledge of Windows kernel drivers. Kubernetes and container technologies. Prior experience in support ticketing tools and processes. Experience in documentation. Certifications - ITIL v3 or ITIL 4. Soft Skills Required Strong communication skills (written and verbal) Clarity of thought User-centric approach Sincere Proactive Self-motivated Logical bent of mind (analytical) Team manager Flexible/adaptable Accops empowers modern enterprises with agility, flexibility, and affordability by providing secure and instant remote access to business applications from any device and network. Founded in October 2012, Accops is headquartered in Pune, India, and is known for its nimble and customizable approach, offering faster response times to dynamic environments. We are a rapidly growing IT product company with a flat organizational structure and flexible work environment. We enable enterprises to adopt 'work from anywhere' and, by joining us, you get to work on hypergrowth technologies like virtualization, cloud computing and network security. Accops is an equal opportunity employer committed to building a culture where all employees are valued and respected and opinions count. We encourage applications from all suitably qualified persons irrespective of, but not limited to, their gender or genetic information, sexual orientation, ethnicity, religion, social status, medical care leave requirements, political affiliation, people with disabilities, color, national origin, veteran status, etc. We consider all applications based on merit and suitability to the role.

Posted 3 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote


Job Title: Cloud Platform Engineer / Senior Engineer / Architect (Full-Time or Contract-based – Flexible Engagement) Location: Gurugram / Remote (Hybrid models considered) About Fluidech Fluidech, an Esconet group company and a deemed public company, is a technology consulting and managed services firm specialising in cybersecurity. Founded in 2014 and headquartered in Gurugram, with a client base today spanning over 100 organizations worldwide, Fluidech designs IT solutions aligned with business objectives, fostering trusted relationships and delivering measurable performance improvements. Established as a born-in-the-cloud company, Fluidech has evolved into a trusted technology partner that helps businesses build (Cloud & Infrastructure), automate (DevOps), and secure (Cyber Security services). Our solutions span diverse industry verticals, aligned with each client’s business goals. In addition to holding ISO 9001 and ISO 27001 certifications and fielding an award-winning cybersecurity team, the company has a strong value proposition in its GRC services across frameworks including but not limited to NCIIPC's CAF, SEBI's CSCRF, and others. Role Overview We are looking for a highly skilled Cloud Platform Engineer / Architect to help Fluidech design and build a secure, scalable, and efficient Private Cloud platform, using open-source cloud infrastructure platforms such as OpenNebula, Apache CloudStack, or similar. This individual will play a hands-on technical leadership role in architecting and deploying our internal cloud ecosystem, enabling compute, storage, and network virtualization, automation, self-service, and orchestration features. Key Responsibilities Design, architect, and implement a private cloud platform from the ground up. Evaluate and choose appropriate cloud stack technologies (OpenNebula, Apache CloudStack, Proxmox, etc.). Build and maintain cloud orchestration, provisioning, and resource management systems.
Integrate storage, networking, and compute resources across virtualized infrastructure. Define and implement cloud security standards and access controls. Collaborate with DevOps, Infrastructure, and Security teams to align the platform with operational needs. Develop self-service portals and automation pipelines for provisioning workloads. Monitor system performance, reliability, and scalability; proactively identify and address issues. Document the entire architecture and design, with handover or operationalization steps. Required Skills & Experience Proven experience in building and managing private cloud platforms using OpenNebula, Apache CloudStack, or equivalent. Strong expertise in virtualization platforms (KVM, Xen, VMware, etc.). Solid understanding of networking, storage technologies, and hypervisor management. Experience with Linux system administration, shell scripting, and infrastructure automation (e.g., Ansible, Terraform). Familiarity with cloud orchestration, multi-tenancy, resource quotas, and metering. Knowledge of cloud security principles, access control, and monitoring. Exposure to containers, Kubernetes, or hybrid cloud environments is a plus. Ability to evaluate trade-offs between open-source and commercial solutions. Preferred Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Relevant certifications (e.g., OpenNebula Certified Professional, Apache CloudStack certifications, Linux Foundation Certified SysAdmin, RHCSA/RHCE). Engagement Flexibility We are open to either full-time employment or a contract-based engagement, depending on the candidate’s availability and interest. Remote work options are available, but the candidate must be accessible for collaboration with internal teams during business hours. Why Join Fluidech Work on cutting-edge cloud and cybersecurity solutions. Opportunity to architect core technology platforms from the ground up. Flexible working models and a high-ownership culture.
Collaborate with a fast-growing, award-winning technology team.

Posted 3 days ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We are seeking a highly skilled Senior Technical Architect with expertise in Databricks, Apache Spark, and modern data engineering architectures. The ideal candidate will have a strong grasp of Generative AI and RAG pipelines and a keen interest (or working knowledge) in Agentic AI systems. This individual will lead the architecture, design, and implementation of scalable data platforms and AI-powered applications for our global clients. This high-impact role requires technical leadership, cross-functional collaboration, and a passion for solving complex business challenges with data and AI. Responsibilities Lead architecture, design, and deployment of scalable data solutions using Databricks and the medallion architecture. Guide technical teams in building batch and streaming data pipelines using Spark, Delta Lake, and MLflow. Collaborate with clients and internal stakeholders to understand business needs and translate them into robust data and AI architectures. Design and prototype Generative AI applications using LLMs, RAG pipelines, and vector stores. Provide thought leadership on the adoption of Agentic AI systems in enterprise environments. Mentor data engineers and solution architects across multiple projects. Ensure adherence to security, governance, performance, and reliability best practices. Stay current with emerging trends in data engineering, MLOps, GenAI, and agent-based systems. Qualifications Bachelor's or Master's degree in Computer Science, Engineering, or related technical discipline. 10+ years of experience in data architecture, data engineering, or software architecture roles. 5+ years of hands-on experience with Databricks, including Spark SQL, Delta Lake, Unity Catalog, and MLflow. Proven experience in designing and delivering production-grade data platforms and pipelines. Exposure to LLM frameworks (OpenAI, Hugging Face, LangChain, etc.) and vector databases (FAISS, Weaviate, etc.). 
Strong understanding of cloud platforms (Azure, AWS, or GCP), particularly in the context of Databricks deployment. Knowledge or interest in Agentic AI frameworks and multi-agent system design is highly desirable. Technical Skills Databricks (incl. Spark, Delta Lake, MLflow, Unity Catalog) Python, SQL, PySpark GenAI tools and libraries (LangChain, OpenAI, etc.) CI/CD and DevOps for data REST APIs, JSON, data serialization formats Cloud services (Azure/AWS/GCP) Soft Skills Strong communication and stakeholder management skills Ability to lead and mentor diverse technical teams Strategic thinking with a bias for action Comfortable with ambiguity and iterative development Client-first mindset and consultative approach Excellent problem-solving and analytical skills Preferred Certifications Databricks Certified Data Engineer / Architect Cloud certifications (Azure/AWS/GCP) Any certifications in AI/ML, NLP, or GenAI frameworks are a plus
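The retrieval half of the RAG pipelines this role mentions can be sketched in a few lines. This is a hypothetical toy, not the client's system: bag-of-words term-frequency vectors and cosine similarity stand in for learned embeddings and a vector store such as FAISS or Weaviate.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Delta Lake provides ACID transactions on data lakes",
    "MLflow tracks machine learning experiments",
]
print(retrieve("ACID transactions for a data lake", docs))
```

In a production RAG pipeline the retrieved passages would then be packed into the LLM prompt as grounding context; the ranking step shown here is the part the vector store replaces at scale.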

Posted 3 days ago

Apply


7.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Our Mission At Palo Alto Networks® everything starts and ends with our mission: Being the cybersecurity partner of choice, protecting our digital way of life. Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we’re looking for innovators who are as committed to shaping the future of cybersecurity as we are. Who We Are We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers, and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day - from disruptive innovation and collaboration, to execution. From showing up for each other with integrity to creating an environment where we all feel included. As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and we respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities - just to name a few! At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision. Job Description Your Career The Engineering TAC (ETAC) Advanced Solutions team is an exciting crossroads between Technical Assistance Center (TAC) and Engineering.
This team is uniquely empowered to drive decisions and to be the thought leaders within the Global Customer Support organization at Palo Alto Networks. We are a relatively small, global team consisting of top performers with support, engineering and development backgrounds. Our roles are very hands-on and have a high impact on the company. The Advanced Solutions team role also includes building/architecting robust environments to assist with complex issue reproduction/resolution, as well as large-scale, cross-platform lab buildouts for feature testing, software releases, etc. Our team consists of engineers who are experienced in Network Engineering, NetSec, QA, Software Development/DevOps, and Cloud, as well as SMEs in bleeding-edge tools such as Ixia/Keysight, Spirent, etc. The team's mission includes Application and Tools Development, AI/Machine Learning R&D, DB Systems Administration, Release Management, and Data Analytics. You will network and collaborate with key stakeholders within Global Support, Engineering, QA, PM, Sales, and more, leveraging your capability of detailing difficult technical issues to both non-technical and technical professionals. Your Impact An ETAC engineer has the highest level of expertise amongst support teams, and is responsible for staying up to date with technical details on Palo Alto Networks' new products and the industry in general. Work with TAC to provide expert-level technical support for customer issues that involve very complex network topologies, architectures, and security designs. Lead technical discussions with cross-functional teams, fostering an environment of transparency that ultimately leads to better products.
Develop advanced troubleshooting-focused tools and scripts to help solve complex customer issues and improve product supportability. Help drive and enable ML/AI related projects. Own critical and executive-level issues, partnering primarily with Customer Support and Engineering to provide expertise in identifying and resolving customer issues, which entails working with the TAC case owner and Engineering on a replication or verification and communicating updates. Lead in identifying problems and taking actions to fix them across support and product life cycles. Develop and deliver expert-level training materials for TAC support, Engineering, and Professional Services teams. Ownership of Release Management: Assist with managing the end-to-end release process, including coordinating with various teams to gather release requirements and dependencies. Responsible for scheduling, planning, and controlling the software delivery process for on-prem and cloud products (CSP/Adminsite/AWS/Azure/OCI/GCP). Coordinate with IT/Dev/QA to ensure IT requirements are met for a seamless release process. Release software after completing the testing/deployment stage. Define strategic usage of release management tools (Autoex/Jenkins/Automation Staging Scripts). Collaborate on product development with cross-functional teams including Engineering/QA/PM. Triage production issues impacting customer deliverables on the Palo Alto Networks Support Portal. Qualifications Your Experience Minimum of 7 years of professional experience. Technical Support or Development experience supporting enterprise customers with very complex LAN/WAN environments. Deep understanding of TCP/IP and advanced knowledge of LAN/WAN technologies; expertise with general routing/switching and routing protocols (e.g.
BGP, OSPF, Multicast), and branch and data center architectures. Expertise with Remote Access VPN solutions, IPSEC, PKI & SSL. Expertise with cloud services and infrastructure is a plus. Familiarity with C, Python, or at least one scripting language - while this is not a developer role, one should have some experience in automating moderately complex tasks. Experience with Palo Alto Networks products is highly desired. Understand how data packets get processed - devices shouldn’t be a “black box”; one should have an understanding of packet processing at various stages and how that can result in different symptoms/outcomes. Excellent communication skills with the ability to deliver highly technical, informative presentations - while you will not be involved with taking calls from a queue, there may be cases where your expertise is called upon to speak with customers from time to time, along with Support members, Developers, Sales Engineers and the rest of your team. Proficiency in creating technical documentation using applications such as PowerPoint/Google Slides or knowledge-base/intranet platforms such as LumApps, Jive or Confluence. Familiar with automation with Jenkins, Terraform, etc. Understanding of Linux operating systems. Able to operate headless Linux systems and do shell scripting. Basic knowledge of deploying and configuring web servers, e.g. Nginx, Apache, IIS. Understanding of load balancing technologies and HTTP forwarding with Nginx, HAProxy, and load balancers provided by AWS, Azure, and Google Cloud. Familiarity with virtualization technologies including VMware, KVM, OpenStack, AWS, Google Cloud and Azure. Familiarity with Docker. Able to create, manage, and deploy Docker images on a Docker server. Manage running containers. Create docker-compose YAML files. Familiar with front-end technologies including JavaScript, React, HTML, and CSS for building responsive, user-friendly interfaces.
Experienced in back-end development using frameworks such as Python and Flask Brings a creative and hands-on approach to testing and enhancing small applications, participating in all aspects of the testing lifecycle—from functional and performance testing to idea generation and continuous monitoring—with a focus on improvement and efficacy to ensure optimal quality and user satisfaction. Willing to work flexible times including occasional weekends and evenings. Additional Information The Team Our technical support team is critical to our success and mission. As part of this team, you enable customer success by providing support to clients after they have purchased our products. Our dedication to our customers doesn’t stop once they sign – it evolves. As threats and technology change, we stay in step to accomplish our mission. You’ll be involved in implementing new products, transitioning from old products to new, and will fix integrations and critical issues as they are raised – in fact, you’ll seek them out to ensure our clients are safely supported. We fix and identify technical problems, with a pointed focus of providing the best customer support in the industry. Our Commitment We’re problem solvers that take risks and challenge cybersecurity’s status quo. It’s simple: we can’t accomplish our mission without diverse teams innovating, together. We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at accommodations@paloaltonetworks.com. Palo Alto Networks is an equal opportunity employer. 
We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics. All your information will be kept confidential according to EEO guidelines.
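The listing above asks for familiarity with creating docker-compose YAML files. As a purely illustrative sketch (the service names, images, and ports are placeholders, not anything from the listing), a minimal compose file wiring a web tier to a backend might look like:

```yaml
# Hypothetical docker-compose.yml: an nginx front end that waits for a
# simple backend service before starting. Images and ports are examples only.
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"      # host:container port mapping
    depends_on:
      - api            # start order only; not a readiness check
  api:
    image: python:3.12-slim
    command: python -m http.server 8000
    expose:
      - "8000"         # reachable from other services, not from the host
```

Note that `depends_on` only controls start order; a production setup would add health checks before relying on the backend being ready.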

Posted 3 days ago

Apply

2.0 years

0 Lacs

India

Remote


🌟 We’re Hiring: Customer Service Representatives & Support Managers 📍 Location : Remote 🕒 Employment Type : Contract-based / Freelance / Part-time – 1 Month 📅 Start Date : [Immediate] Are you passionate about delivering exceptional customer experiences and driving support excellence? Join our fast-paced, customer-obsessed team where you’ll play a critical role in shaping how we support users across multiple channels and platforms. 🔧 Key Responsibilities Respond to and resolve multichannel support tickets (email, chat, voice, social, etc.) Monitor and report key support KPIs and metrics (e.g., CSAT, FRT, ART, etc.) Update and maintain internal knowledge bases and help center documentation Handle customer escalations with professionalism and urgency Coach, mentor, and lead junior support agents to consistently meet quality standards Identify and implement process improvements to increase efficiency and customer satisfaction Collaborate with cross-functional teams (product, sales, QA) to relay customer insights 💻 Tools & Platforms You’ll Work With Commercial Support & CX Platforms: Zendesk, Freshdesk, Salesforce Service Cloud, ServiceNow HubSpot Service Hub, Intercom, Helpscout NICE IEX, Verint, Assembled RingCentral, Nextiva Tableau, Qualtrics, SurveyMonkey Slack, Microsoft Teams Open Source / Free Tools: Ticketing: osTicket, Zammad, Request Tracker, UVDesk, FreeScout Messaging: Rocket.Chat, Mattermost, Element, Jitsi Meet Documentation: DokuWiki, BookStack, MediaWiki, Outline Reporting & Analytics: Metabase, Apache Superset, Google Data Studio (free) Survey & Feedback: Google Forms, LimeSurvey ✅ What We’re Looking For 2+ years of experience in customer support or service delivery roles Strong verbal and written communication skills Proven ability to manage and resolve complex customer issues Familiarity with support automation, AI/chatbots, or workflow optimization is a plus Experience with both enterprise and open-source tools is an advantage Leadership or team 
coaching experience (for Support Manager applicants) Interested? Please share your profile with Ganapathikumar@highbrowtechnology.com

Posted 3 days ago

Apply

0 years

0 Lacs

Mulshi, Maharashtra, India

On-site


Area(s) of responsibility: Data Management (AWS) Developer

We are looking for a Data Management (AWS) developer who will serve as the technical counterpart to data stewards across various business domains. This role will focus on the technical aspects of data management, including the integration of data catalogs, data quality management, and access management frameworks within our data lakehouse.

Key Responsibilities

Integrate the Acryl data catalog with the AWS Glue data catalog to enhance data discoverability and management. Develop frameworks and processes for deploying and maintaining data classification and data quality rules in the data lakehouse. Implement and maintain Lake Formation access frameworks, including OpenID Connect (OIDC) for secure data access. Build and maintain data quality and classification reports and visualizations to support data-driven decision-making. Develop and implement mechanisms for column-level data lineage in the data lakehouse. Collaborate with data stewards to ensure effective data ownership, cataloging, and metadata management.

Qualifications

Relevant experience in data management, data governance, or related technical fields. Strong technical expertise in AWS services, particularly AWS Glue, Lake Formation, and data quality management tools. Familiarity with data security practices, including OIDC and AWS IAM. Experience with AWS Athena and Apache Airflow. Familiarity with Terraform, GitHub, and Python. Relevant certifications (e.g., CDMP) are a plus.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Experience: 5+ years
Notice Period: Immediate to 15 days
Rounds: 3 rounds (virtual)
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description

The Role

Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. Constructing infrastructure for efficient ETL processes from various sources and storage systems. Leading the implementation of algorithms and prototypes to transform raw data into useful information. Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. Creating innovative data validation methods and data analysis tools. Ensuring compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing data for prescriptive and predictive modeling. Continuously exploring opportunities to enhance data quality and reliability. Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements

Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala). 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines. High proficiency in Scala/Java and Spark for applied large-scale data processing. Expertise with big data technologies, including Spark, Data Lake, and Hive.

Posted 3 days ago

Apply

5.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description

Role Purpose

The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do

Oversee and support the process by reviewing daily transactions on performance parameters. Review the performance dashboard and the scores for the team. Support the team in improving performance parameters by providing technical support and process guidance. Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions. Ensure standard processes and procedures are followed to resolve all client queries. Resolve client queries as per the SLAs defined in the contract. Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting. Document and analyze call logs to spot the most frequently occurring trends and prevent future problems. Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution. Ensure all product information and disclosures are given to clients before and after the call/email requests. Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries. Manage and resolve technical roadblocks/escalations as per SLA and quality requirements. If unable to resolve the issues, escalate them to TA & SES in a timely manner. Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions. Troubleshoot all client queries in a user-friendly, courteous, and professional manner. Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business. Organize ideas and effectively communicate oral messages appropriate to listeners and situations. Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client. Mentor and guide Production Specialists on improving technical knowledge. Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists. Develop and conduct trainings (triages) within products for Production Specialists as per target. Inform the client about the triages being conducted. Undertake product trainings to stay current with product features, changes, and updates. Enroll in product-specific and any other trainings per client requirements/recommendations. Identify and document the most common problems and recommend appropriate resolutions to the team. Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver

No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability development | Triages completed, Technical Test performance

Mandatory Skills: Apache Spark
Experience: 5-8 Years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 3 days ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Work experience: 3-6 years
Budget: 7 Lac max
Notice period: Immediate to 30 days

Linux

Install, configure, and maintain Linux servers (Red Hat, CentOS, Ubuntu, Amazon Linux).
Install the Linux OS through network and Kickstart installation.
Manage system updates, patch management, and kernel upgrades.
Create and manage user accounts, file systems, permissions, and storage.
Write shell scripts (Bash, Python) for task automation.
Monitor server performance and troubleshoot hardware/software issues.
Handle incident management, root cause analysis, and preventive maintenance.
Implement and manage backup solutions (rsync, cron jobs, snapshot backups).
Harden servers by configuring firewalls (iptables, firewalld), securing SSH, and managing SELinux.
Configure and troubleshoot networking services (DNS, DHCP, FTP, HTTP, NFS, Samba).
Work on virtualization and cloud technologies (AWS EC2, VPC, S3, RDS basics if required).
Maintain detailed documentation of system configuration and procedures.
Implement and configure Apache & Tomcat web servers with OpenSSL on Linux.
Swap space management.
LVM (extending, reducing, removing, and merging), backup and restoration.

Amazon Web Services

AWS Infrastructure Management: Provision and manage cloud resources like EC2, S3, RDS, VPC, IAM, EKS, Lambda.
Cloud Architecture: Design and implement secure, scalable, and reliable cloud solutions.
Automation and IaC: Automate deployments using tools like Terraform, CloudFormation, or AWS CDK.
Security Management: Configure IAM roles, security groups, and encryption (KMS), and enforce security best practices.
Monitoring and Optimization: Monitor cloud resources with CloudWatch and X-Ray, and optimize for cost and performance.
Backup and Disaster Recovery: Set up data backups (S3, Glacier, EBS snapshots) and design DR strategies.
CI/CD Implementation: Build and maintain CI/CD pipelines using AWS services (CodePipeline, CodeBuild) or Jenkins, GitLab, GitHub.
Networking: Manage VPCs, subnets, internet gateways, NAT, VPNs, and Route 53 DNS configurations.
Troubleshooting and Support: Identify and fix cloud resource issues, perform root cause analysis.
Migration Projects: Migrate on-premises servers, databases, and applications to AWS.

Windows Server and Azure

Active Directory: implementation, migration, management, and troubleshooting.
Deep knowledge of DHCP Server.
Deep knowledge of patch management.
Troubleshooting the Windows operating system.
Decent knowledge of Azure (creation of VMs, configuring network rules, migration, management, and troubleshooting).
Deep knowledge of VMware ESXi (upgrading server firmware, creating VMs, managing backups, monitoring, etc.).

Networking

Knowledge of IP addressing, NAT, P2P protocols, SSL and IPsec VPNs, etc.
Deep knowledge of VPNs.
Knowledge of MVoIP, VMs, SIP PRI, and leased lines.
Monitoring network bandwidth and maintaining stability.
Configuring switches and routers.
Troubleshooting network devices.
Must be able to work on Cisco Meraki access point devices.

Firewall & Endpoint Security

Decent knowledge of Fortinet firewalls, including creating objects, routing, creating rules, and monitoring.
Decent knowledge of CrowdStrike.
Knowledge of vulnerability assessment.

Office 365

Deep knowledge of Office 365 (creation of mail, backup and archive, security rules, security filters, creation of distribution lists, etc.).
Knowledge of MX, TXT, and other DNS records.
Deep knowledge of Office 365 apps like Teams, Outlook, Excel, etc.
SharePoint management.

Other Tasks

Hardware servicing of laptops and desktops.
Maintaining the asset inventory up to date.
Managing the utility invoices.
Handling L1 and L2 troubleshooting.
Vendor management.
Handling application-related issues.
Website hosting and monitoring.
Tracking all software licenses and cloud service renewal periods, ensuring they are renewed on time.
Monitoring, managing, and troubleshooting servers.
Knowledge of NAS.
Knowledge of Endpoint Central and ticketing tools.
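The listing above combines shell scripting for task automation with server performance monitoring. A minimal sketch of that kind of automation (the threshold, function name, and canned `df` sample are made up for illustration) might look like:

```shell
#!/bin/sh
# Illustrative monitoring sketch: flag any filesystem whose usage exceeds
# THRESHOLD percent. The threshold and sample data are placeholders.

THRESHOLD=80

# check_usage reads `df -P`-style output on stdin and prints
# "mountpoint usage%" for each filesystem over the threshold.
check_usage() {
    awk -v limit="$THRESHOLD" 'NR > 1 {
        use = $5
        sub(/%$/, "", use)          # strip the trailing % sign
        if (use + 0 > limit) print $6, $5
    }'
}

# Canned sample instead of a live `df -P` call, so the sketch is reproducible.
sample='Filesystem 1024-blocks Used Available Capacity Mounted-on
/dev/sda1 1000000 920000 80000 92% /
/dev/sdb1 2000000 400000 1600000 20% /data'

alerts=$(printf '%s\n' "$sample" | check_usage)
printf '%s\n' "$alerts"
```

In real use you would pipe live output (`df -P | check_usage`) and run the script from cron, mailing or logging the alerts.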

Posted 3 days ago

Apply

Exploring Apache Jobs in India

Apache is a widely used software foundation that offers a range of open-source software solutions. In India, the demand for professionals with expertise in Apache tools and technologies is on the rise. Job seekers looking to pursue a career in Apache-related roles have a plethora of opportunities in various industries. Let's delve into the Apache job market in India to gain a better understanding of the landscape.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving IT sectors and see a high demand for Apache professionals across different organizations.

Average Salary Range

The salary range for Apache professionals in India varies based on experience and skill level.

  • Entry-level: INR 3-5 lakhs per annum
  • Mid-level: INR 6-10 lakhs per annum
  • Experienced: INR 12-20 lakhs per annum

Career Path

In the Apache job market in India, a typical career path may progress as follows:

  1. Junior Developer
  2. Developer
  3. Senior Developer
  4. Tech Lead
  5. Architect

Related Skills

Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:

  • Linux
  • Networking
  • Database Management
  • Cloud Computing

Interview Questions

  • What is Apache HTTP Server and how does it differ from Apache Tomcat? (medium)
  • Explain the difference between Apache Hadoop and Apache Spark. (medium)
  • What is mod_rewrite in Apache and how is it used? (medium)
  • How do you troubleshoot common Apache server errors? (medium)
  • What is the purpose of .htaccess file in Apache? (basic)
  • Explain the role of Apache Kafka in real-time data processing. (medium)
  • How do you secure an Apache web server? (medium)
  • What is the significance of Apache Maven in software development? (basic)
  • Explain the concept of virtual hosts in Apache. (basic)
  • How do you optimize Apache web server performance? (medium)
  • Describe the functionality of Apache Solr. (medium)
  • What is the purpose of Apache Camel? (medium)
  • How do you monitor Apache server logs? (medium)
  • Explain the role of Apache ZooKeeper in distributed applications. (advanced)
  • How do you configure SSL/TLS on an Apache web server? (medium)
  • Discuss the advantages of using Apache Cassandra for data management. (medium)
  • What is the Apache Lucene library used for? (basic)
  • How do you handle high traffic on an Apache server? (medium)
  • Explain the concept of .htpasswd in Apache. (basic)
  • What is the role of Apache Thrift in software development? (advanced)
  • How do you troubleshoot Apache server performance issues? (medium)
  • Discuss the importance of Apache Flume in data ingestion. (medium)
  • What is the significance of Apache Storm in real-time data processing? (medium)
  • How do you deploy applications on Apache Tomcat? (medium)
  • Explain the concept of .htaccess directives in Apache. (basic)
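Several of the basic questions above (virtual hosts, SSL/TLS configuration, mod_rewrite) come together in a single server configuration. As a hedged sketch only - the domain, certificate paths, and rewrite rule are placeholders, and directive availability assumes Apache 2.4 with mod_ssl and mod_rewrite enabled - a name-based HTTPS virtual host might look like:

```apache
# Hypothetical TLS-enabled virtual host; all names and paths are examples.
<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot /var/www/example

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key

    # mod_rewrite example: send a legacy URL to its new location
    RewriteEngine on
    RewriteRule ^/old-page$ /new-page [R=301,L]
</VirtualHost>

# Companion HTTP host that redirects all plain traffic to HTTPS
<VirtualHost *:80>
    ServerName www.example.com
    Redirect permanent / https://www.example.com/
</VirtualHost>
```

The same RewriteRule could instead live in a per-directory .htaccess file, which is what the .htaccess questions above are probing: server-level config is preferred where you control the main configuration, while .htaccess allows overrides without a server restart.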

Conclusion

As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies