
4841 Apache Jobs - Page 23

JobPe aggregates listings so they are easy to find and access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support and guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: leading the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements; striving for continuous improvement by testing the built solution and working under an agile framework; and discovering and implementing the latest technology trends to build creative solutions.

Preferred Education
Master's degree.

Required Technical And Professional Expertise
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data processing frameworks: knowledge of libraries such as Pandas and NumPy. SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including their cloud storage systems.

Preferred Technical And Professional Experience
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products, the platform, and customer-facing systems.
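To illustrate the kind of PySpark work this posting describes, here is a minimal batch-transformation sketch. It is only an assumption-laden example: the bucket paths, column names and aggregation are hypothetical placeholders, not details from the role.

# Minimal PySpark batch-transformation sketch (illustrative only; paths and
# column names are hypothetical, not taken from the job posting).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_example").getOrCreate()

# Read raw events from a (hypothetical) data lake location.
raw = spark.read.parquet("s3://example-bucket/raw/events/")

# Basic cleansing and aggregation, the bread and butter of an ETL pipeline.
daily = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("event_count"))
)

# Write a partitioned, query-friendly table back to the lake.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)
spark.stop()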

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Overview:
The person will be responsible for expanding and optimizing our data and data pipeline architecture. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up.

You'll be responsible for:
Creating and maintaining optimal data pipeline architecture, and assembling large, complex data sets that meet functional and non-functional business requirements. Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies. Building analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Creating data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. Working with data and analytics experts to strive for greater functionality in our data systems.

You'd have:
We are looking for a candidate with 3+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools: data pipeline and workflow management tools such as Apache Airflow, NiFi, Talend, etc.; relational SQL and NoSQL databases, including ClickHouse, Postgres and MySQL; stream-processing systems such as Storm, Spark Streaming, Kafka, etc.; and object-oriented or functional scripting languages such as Python, Scala, etc. Experience building and optimizing data pipelines, architectures and data sets. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases. Strong analytic skills related to working with unstructured datasets. Ability to build processes supporting data transformation, data structures, metadata, dependency and workload management. Working knowledge of message queuing, stream processing, and highly scalable data stores.

Why Join Us?
Impactful work: play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry. Tremendous growth opportunities: be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development. Innovative environment: work alongside a world-class team in a challenging and fun environment, where innovation is celebrated. Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
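For context on the workflow-management tools named above, the following is a minimal, Airflow 2.x-style DAG skeleton. The DAG id, schedule and the extract/load callables are hypothetical stand-ins, shown only to indicate what pipeline orchestration of this kind looks like.

# Illustrative Apache Airflow DAG skeleton; task names and the extract/load
# functions are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull a batch of rows from a source system.
    print("extracting batch")


def load(**context):
    # Placeholder: write the transformed batch to the warehouse.
    print("loading batch")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task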

Posted 3 days ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


About Us
Newfold Digital is a leading web technology company serving nearly 7 million customers globally. Established in 2021 through the combination of leading web services providers Endurance Web Presence and Web.com Group, Newfold's mission is to empower success in a connected world, with a focus on helping businesses of all sizes thrive online. The company's world-class family of brands includes BlueHost, HostGator, iPage, Domain.com, A Small Orange, MOJO Marketplace, BigRock, and ResellerClub.

What you'll do
Participate in 24x7 shifts. Monitor the stability of our products with various internal tools. Take L1 support ownership of all hosting products (cPanel/Plesk/VPS/Cloud/Dedicated). Handle incident response, troubleshooting, and fixes for various products/services. Handle escalations as per policies and procedures. Bring different internal and external groups together to resolve production site issues effectively. Communicate clearly on tickets and phone calls made to the team about various issues. Exhibit a sense of urgency to resolve issues. Build advanced automation workflows for automating repeated issues. Work with our infrastructure team to deploy and maintain Linux/Windows servers using automated scripts and a predefined runbook. Ensure SLAs and operational standards are met. Raise tickets to different internal groups to resolve recurrent problems and alerts, and follow up on escalated issues. Liaise with engineering teams for RCAs and permanent resolutions of issues affecting production sites. Contribute to the operations handbook. Ensure smooth hand-offs between shifts.

Who you are
Educational qualifications: graduate, preferably in Information Technology or Computer Science, with consistently strong academic performance. Linux: good understanding of Linux systems, any shell/Bash, sed/awk/grep/egrep, vi/Vim/Emacs, netstat, lsof, strace, ps/top/atop/dstat, GRUB boot config and systems rescue, fstab/disk labels, ext3/ext4, iptables, sysstat (sar/vmstat/iostat, etc.), run levels and startup scripts, sudo/chroot/chkrootkit/rkhunter. Windows: Windows 2000/2003/2008, NTFS chkdsk/ACLs etc., troubleshooting system/application faults using event logs, updates via WSUS, Terminal Services, IIS fundamentals. Fundamentals: basic DNS and networking, TCP/UDP, IP routing, HA and load-balancing concepts. Application protocols: SMTP, HTTP, FTP, IMAP, POP. Shifts: must be willing to work in shifts (including at night and on holidays).

Good To Have
Understanding of cloud systems and hardware: RAID, LOM/IPMI/IP KVMs, Dell hardware. Windows: WMI, PowerShell/VB scripts, MS-SQL fundamentals. Applications: Postfix/qmail/Exim, database systems fundamentals (MySQL/Postgres), Nginx/Apache (mod_php, mod_fcgid, CGI, php-fpm, etc.), Tomcat. Tools/utilities: Nagios, DHCP, Kickstart/Cobbler, Yum, RPM, Git/SVN. Others: regular expressions, rescue kits like TRK, etc. Certification: Red Hat Certified Engineer (RHCE), GCP.

Why You'll Love Us
In this era of COVID-19, we believe in putting our employees first and keeping them safe. We were one of the first technology companies to make significant changes to our office environments and team interactions, including mandatory working from home and safety procedures to enter our office space. We are committed to no face-to-face interaction with our employees until the data shows it is entirely safe for our teams. Here is just a snippet of what we think you'll love:
Grow together. Our exciting virtual learning and development programs never cease to amaze us. Participate in our Expert Speak sessions and e-learning courses to grow professionally and personally.
Work with creative and innovative teams. We believe in hiring the best of the best and are proud of being surrounded by people who think out of the box to only better our products, work and customer experiences.
Did someone say free domain? Building a community one domain at a time, one employee at a time. All our employees are eligible for a free domain and WordPress blog, as we sponsor the domain registration costs.
Leave your worries aside! Juggling the demands of career and personal life can be stressful and challenging, but don't worry! Our employee assistance program services provide free, confidential, short-term counseling. This benefit is also extended to an immediate family member.
This Job Description includes the essential job functions required to perform the job described above, as well as additional duties and responsibilities. This Job Description is not an exhaustive list of all functions that the employee performing this job may be required to perform. The Company reserves the right to revise the Job Description at any time, and to require the employee to perform functions in addition to those listed above.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities
Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing services.

Preferred Education
Master's degree.

Required Technical And Professional Expertise
Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on cloud data platforms on Azure; experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server DB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical And Professional Experience
Certification in Azure and Databricks, or Cloudera Spark certified developers.
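As a rough sketch of the streaming-pipeline work mentioned above, the snippet below shows PySpark Structured Streaming reading from Kafka and landing raw payloads in cloud storage. It assumes the spark-sql-kafka package is available on the cluster, and the broker address, topic and storage paths are hypothetical placeholders.

# Sketch of a Kafka-to-storage streaming ingestion job (illustrative only).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_stream_example").getOrCreate()

# Subscribe to a Kafka topic; Spark treats it as an unbounded DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast the value to a string for parsing.
parsed = events.select(F.col("value").cast("string").alias("payload"))

# Land the raw payloads in storage with checkpointing for recovery.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "abfss://lake@example.dfs.core.windows.net/raw/events/")
    .option("checkpointLocation", "abfss://lake@example.dfs.core.windows.net/checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()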

Posted 3 days ago

Apply

14.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description and Requirements

Position Summary:
A highly skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, engineering, and architecture of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, scripting, and infrastructure-as-code for automating and optimizing operations is highly desirable. Experience in collaborating with cross-functional teams, including application development, infrastructure, and operations, is highly preferred.

Job Responsibilities:
Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex Big Data clusters. Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, cluster changes, and compliance. Identifies and resolves issues utilizing structured tools and techniques. Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance. Writes scripts to automate routine cluster management tasks and documents maintenance processing flows per standards. Implements industry best practices while performing Hadoop cluster administration tasks. Works in an Agile model with a strong understanding of Agile concepts. Collaborates with development teams to provide and implement new features. Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic. Addresses organizational obstacles to enhance processes and workflows. Adopts and learns new technologies based on demand, and supports team members by coaching and assisting.

Education: Bachelor's degree in Computer Science, Information Systems, or another related field, with 14+ years of IT and infrastructure engineering work experience.

Experience: 14+ years of total IT experience and 10+ years of relevant experience in Big Data database technologies.

Technical Skills:
Big Data Platform Management: expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM BigSQL.
Data Infrastructure & Security: proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos.
Performance Tuning & Optimization: skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency.
Backup & Recovery: experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
Linux & Troubleshooting: strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams.
DevOps & Scripting: proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations; experienced in infrastructure-as-code practices and observability tools such as Elastic.
Agile & Collaboration: strong understanding of Agile SAFe for Teams, with the ability to work effectively in Agile environments and collaborate with cross-functional teams.
ITSM Process & Tools: knowledgeable in ITSM processes and tools such as ServiceNow.

Other Critical Requirements:
Automation and Scripting: proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency.
Analytical and Problem-Solving Skills: strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment.
24x7 Support: ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
Team Management and Leadership: proven experience managing geographically distributed and culturally diverse teams, with strong leadership, coaching, and mentoring skills.
Communication Skills: exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels.
Stakeholder Management: prior experience in effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams.
Business Presentations: skilled in creating and delivering impactful business presentations to communicate key insights and recommendations.
Collaboration and Independence: demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
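As a small illustration of the routine-automation scripting this kind of Hadoop administration role calls for, here is a hypothetical Python sketch that wraps the standard HDFS CLI to flag oversized directories. The path and threshold are placeholder assumptions, not details from the posting.

# Hypothetical cluster-housekeeping script: report HDFS directories whose
# usage exceeds an (arbitrary) example threshold.
import subprocess

THRESHOLD_BYTES = 5 * 1024**4  # 5 TiB, an example threshold


def hdfs_usage(path):
    """Return (bytes_used, path) tuples parsed from `hdfs dfs -du <path>`."""
    out = subprocess.run(
        ["hdfs", "dfs", "-du", path],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = []
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            # First column is bytes used; last column is the directory path.
            rows.append((int(parts[0]), parts[-1]))
    return rows


if __name__ == "__main__":
    for size, directory in hdfs_usage("/data"):
        if size > THRESHOLD_BYTES:
            print(f"WARNING: {directory} is using {size} bytes")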

Posted 3 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote


At NiCE, we don't limit our challenges. We challenge our limits. Always. We're ambitious. We're game changers. And we play to win. We set the highest standards and execute beyond them. And if you're like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what's the role all about?
NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA; it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It's widely used in industries like banking, insurance, telecom, healthcare, and customer service.
We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you'll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
Interface with various R&D groups, customer support teams, business partners and customers globally to address and resolve product issues. Maintain quality and ongoing internal and external communication throughout your investigation. Provide a high level of support and minimize R&D escalations. Prioritize daily missions/cases and manage critical issues and situations. Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating and mentoring other support engineers. Be willing to perform on-call duties as required. Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions. Good communication skills with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
Minimum of 8 to 12 years of experience in supporting global enterprise customers. Monitor, troubleshoot, and maintain RPA bots in production environments. Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar. Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs. Familiarity with ETL processes and data pipelines is an advantage. Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests. Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments. Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance. Familiarity with authentication methods like WinSSO and SAML. Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration. Working and troubleshooting knowledge of Apache software components like Tomcat, Apache HTTP Server and ActiveMQ. Working and troubleshooting knowledge of SVN/version-control applications. Knowledge of DB schemas, structure, SQL queries (DML, DDL) and troubleshooting. Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues. Knowledge of terminal servers (Citrix) is an advantage. Basic understanding of AWS cloud systems. Network troubleshooting skills (working with different tools). Certification in RPA platforms and working knowledge of RPA application development/support is an advantage. NICE certification and knowledge of RTI/RTS/APA products is an advantage. Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest, transform, store and validate data. Shift: 24x7 rotational shift (including night shifts).

Other Required Skills:
Excellent verbal and written communication skills. Strong troubleshooting and problem-solving skills. Self-motivated and directed, with keen attention to detail. Team player with the ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7326
Reporting into: Tech Manager
Role Type: Individual Contributor

About NiCE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
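To give a flavour of the log-collection and analysis side of this support role, here is a small, hypothetical Python sketch that scans an application-server log for repeated error signatures. The log path and patterns are illustrative assumptions, not product specifics.

# Summarise the most frequent error signatures in a server log (sketch).
import re
from collections import Counter

LOG_FILE = "/var/log/tomcat/catalina.out"      # placeholder path
ERROR_PATTERN = re.compile(r"\b(ERROR|SEVERE)\b\s+(.*)")


def summarise_errors(path, top_n=10):
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            match = ERROR_PATTERN.search(line)
            if match:
                # Group similar messages by their first few words.
                signature = " ".join(match.group(2).split()[:6])
                counts[signature] += 1
    return counts.most_common(top_n)


if __name__ == "__main__":
    for signature, count in summarise_errors(LOG_FILE):
        print(f"{count:6d}  {signature}")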

Posted 3 days ago

Apply

7.0 years

0 Lacs

Mulshi, Maharashtra, India

On-site


Company Description
We at Prometteur Solutions Pvt. Ltd. are a team of IT experts who came with a promise of delivering technology-empowered business solutions. We provide world-class software and web development services that focus on playing a supportive role to your business and its holistic growth. Our highly skilled associates and global delivery capabilities ensure the accessibility and scale to align clients' technology solutions with their business needs. Our offerings span the entire IT lifecycle: from consulting through packaged, custom, and cloud applications, as well as a variety of infrastructure services.

Job Description
Experience: 7+ years
Location: Bangalore
Notice period: immediate joiner
7+ years of software development experience. Good "Go" implementation capabilities. Understanding of different design principles. Good understanding of the Linux OS: memory, instruction processing, filesystem, system daemons, etc. Fluent with the Linux command line and shell scripting. Working knowledge of servers (nginx, apache, etc.), proxy servers, and load balancing. Understanding of service-based architecture and microservices. Working knowledge of AV codecs, MPEG-TS and adaptive streaming such as DASH and HLS. Good understanding of computer networking concepts. Working knowledge of relational databases. Good analytical and debugging skills. Knowledge of Git or any other source code management tool.

Good to Have Skills:
Working knowledge of Core Java and Python is preferred. Exposure to cloud computing is preferred. Exposure to API or video streaming performance testing is preferred. Experience in Elasticsearch and Kibana (ELK stack) is preferred.

Qualifications
Bachelor's degree in Computer Science Engineering or a related field

Posted 3 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote


At NiCE, we don't limit our challenges. We challenge our limits. Always. We're ambitious. We're game changers. And we play to win. We set the highest standards and execute beyond them. And if you're like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what's the role all about?
NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA; it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It's widely used in industries like banking, insurance, telecom, healthcare, and customer service.
We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you'll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
Interface with various R&D groups, customer support teams, business partners and customers globally to address and resolve product issues. Maintain quality and ongoing internal and external communication throughout your investigation. Provide a high level of support and minimize R&D escalations. Prioritize daily missions/cases and manage critical issues and situations. Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating and mentoring other support engineers. Be willing to perform on-call duties as required. Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions. Good communication skills with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
Minimum of 5 to 7 years of experience in supporting global enterprise customers. Monitor, troubleshoot, and maintain RPA bots in production environments. Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar. Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs. Familiarity with ETL processes and data pipelines is an advantage. Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests. Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments. Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance. Familiarity with authentication methods like WinSSO and SAML. Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration. Working and troubleshooting knowledge of Apache software components like Tomcat, Apache HTTP Server and ActiveMQ. Working and troubleshooting knowledge of SVN/version-control applications. Knowledge of DB schemas, structure, SQL queries (DML, DDL) and troubleshooting. Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues. Knowledge of terminal servers (Citrix) is an advantage. Basic understanding of AWS cloud systems. Network troubleshooting skills (working with different tools). Certification in RPA platforms and working knowledge of RPA application development/support is an advantage. NICE certification and knowledge of RTI/RTS/APA products is an advantage. Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest, transform, store and validate data. Shift: 24x7 rotational shift (including night shifts).

Other Required Skills:
Excellent verbal and written communication skills. Strong troubleshooting and problem-solving skills. Self-motivated and directed, with keen attention to detail. Team player with the ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7556
Reporting into: Tech Manager
Role Type: Individual Contributor

About NiCE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.

Posted 3 days ago

Apply

7.0 years

0 Lacs

Mohali district, India

On-site


Job Summary:
We are seeking a skilled Database & ETL Developer with strong expertise in SQL, data modeling, cloud integration, and reporting tools. The ideal candidate will be responsible for designing scalable database architectures, optimizing complex queries, working with modern ETL tools, and delivering insightful dashboards and reports. This role requires collaboration across multiple teams and the ability to manage tasks in a dynamic, agile environment.

Experience: 7+ years
Location: Mohali (work from office only)

Key Responsibilities:
Design and implement normalized and scalable database schemas using ER modeling techniques. Develop, maintain, and optimize stored procedures, triggers, views, and advanced SQL queries (e.g., joins, subqueries, indexing). Execute database backup, restore, and recovery operations, ensuring data integrity and high availability. Optimize SQL performance through indexing strategies, execution plans, and query refactoring. Lead and support cloud migration and integration projects involving platforms such as AWS and Azure. Implement and manage data lake architectures such as AWS HealthLake and AWS Glue pipelines. Create and manage interactive dashboards and business reports using Power BI, Amazon QuickSight, or Tableau. Collaborate with cross-functional teams and use tools like JIRA, Azure Boards, ClickUp, or Trello for task and project tracking.

Required Skills:
Strong experience with SQL Server or PostgreSQL, including advanced T-SQL programming. In-depth knowledge of ER modeling and relational database design principles. Proficiency in query optimization, indexing, joins, and subqueries. Hands-on experience with modern ETL and data integration platforms such as Airbyte, Apache Airflow, Azure Data Factory (ADF), and AWS Glue. Understanding of data lake and HealthLake architectures and their role in cloud data ecosystems. Proficiency with reporting tools like Power BI, Amazon QuickSight, or Tableau. Experience with database backup, restore, and high-availability strategies. Familiarity with project/task tracking tools such as JIRA, Azure Boards, ClickUp, or Trello.

Soft Skills:
Strong verbal and written communication skills. Excellent problem-solving and troubleshooting abilities. Self-motivated, with the ability to manage priorities and work independently across multiple projects.

Nice to Have:
Certification in cloud platforms (AWS, Azure). Exposure to healthcare data standards and compliance (e.g., HIPAA, FHIR).

Company Overview:
smartData is a leader in the global software business space when it comes to business consulting and technology integrations, making business easier, accessible, secure and meaningful for its target segment of startups to small and medium enterprises. As your technology partner, we provide both domain and technology consulting; our in-house products and our unique productized service approach help us act as business integrators, saving substantial time to market for our esteemed customers. With 8000+ projects and vast experience of 20+ years, backed by offices in the US, Australia, and India providing next-door assistance and round-the-clock connectivity, we ensure continual business growth for all our customers. Our business consulting and integrator services via software solutions focus on the important industries of healthcare, B2B, B2C, and B2B2C platforms, online delivery services, video platform services, and IT services. Strong expertise in Microsoft, LAMP stack, and MEAN/MERN stack, with a mobility-first approach via native (iOS, Android, Tizen) or hybrid (React Native, Flutter, Ionic, Cordova, PhoneGap) mobility stacks mixed with AI & ML, helps us deliver on the ongoing needs of customers continuously. For more information, visit http://www.smartdatainc.com
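As an illustration of the query-optimization work described in this posting, here is a small, hypothetical Python sketch against PostgreSQL: it adds an index for a frequent lookup and inspects the plan with EXPLAIN ANALYZE. The connection string, table and column names are placeholders.

# Sketch: create an index and verify the planner uses it (PostgreSQL/psycopg2).
import psycopg2

conn = psycopg2.connect("dbname=example user=example host=localhost")
conn.autocommit = True

with conn.cursor() as cur:
    # Index the column used in the common filter below.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_orders_customer_id "
        "ON orders (customer_id)"
    )
    # EXPLAIN ANALYZE shows whether the new index is actually picked up.
    cur.execute(
        "EXPLAIN ANALYZE "
        "SELECT order_id, total_amount FROM orders WHERE customer_id = %s",
        (12345,),
    )
    for (plan_line,) in cur.fetchall():
        print(plan_line)

conn.close()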

Posted 3 days ago

Apply

0 years

0 Lacs

Raipur, Chhattisgarh, India

On-site


Role Summary
We are seeking a highly motivated and skilled Data Engineer to join our data and analytics team. This role is ideal for someone with strong experience in building scalable data pipelines, working with modern lakehouse architectures, and deploying data solutions on Microsoft Azure. You'll be instrumental in developing, orchestrating, and maintaining our real-time and batch data infrastructure using tools like Apache Spark, Apache Kafka, Apache Airflow, Azure Data Services, and modern DevOps practices.

Key Responsibilities
Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark. Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses. Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam. Build and schedule jobs using orchestration tools like Apache Airflow or Dagster. Perform data modeling using the Kimball methodology for building dimensional models in Snowflake or other data warehouses. Implement data versioning and transformation using dbt and Apache Iceberg or Delta Lake. Manage data cataloging and lineage using tools like Marquez or Collibra. Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes. Set up and maintain monitoring and alerting systems using Prometheus and Grafana for performance and reliability.

Required Skills & Qualifications
Programming & Scripting: proficiency in Python, with strong knowledge of OOP and data structures and algorithms; comfortable working in Linux environments for development and deployment.
Database Technologies: strong command of SQL and understanding of relational (DBMS) and NoSQL databases.
Big Data & Real-Time Processing: solid experience with Apache Spark (PySpark/Scala); familiarity with real-time processing tools like Kafka, Flink, or Beam.
Orchestration & Scheduling: hands-on experience with Airflow, Dagster, or similar orchestration tools.
Cloud Platform: deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, Azure Functions, etc.; AZ-900 or other Azure certifications are a plus.
Lakehouse & Warehousing: knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake; understanding of modern lakehouse architecture and related best practices.
Data Cataloging & Governance: familiarity with Marquez, Collibra, or other cataloging tools.
DevOps & CI/CD: experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools.
Monitoring & Logging: proficiency in setting up dashboards and alerts with Prometheus and Grafana.

Note: immediate joiners will be preferred.
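To show what the Delta Lake side of this lakehouse work can look like, here is a minimal upsert sketch. It assumes a Spark session with the delta-spark package configured; the storage paths and the join key are hypothetical placeholders.

# Incremental upsert into a Delta Lake table (illustrative sketch).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_upsert_example").getOrCreate()

# Newly arrived records (placeholder staging location).
updates = spark.read.parquet(
    "abfss://lake@example.dfs.core.windows.net/staging/customers/"
)

target = DeltaTable.forPath(
    spark, "abfss://lake@example.dfs.core.windows.net/curated/customers/"
)

# Merge: update existing rows by key, insert the rest.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)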

Posted 3 days ago

Apply

8.0 - 10.0 years

0 Lacs

Delhi, India

On-site


Location: Bengaluru / Delhi
Reports To: Chief Revenue Officer

Position Overview:
We are looking for a highly motivated Pre-Sales Specialist to join our team at Neysa, a rapidly growing AI Cloud Platform company that's making waves in the industry. This is a customer-facing technical position that works closely with sales teams to understand client requirements, design tailored solutions and drive technical engagements. You will be responsible for presenting complex technology solutions to customers, creating compelling demonstrations, and assisting in the successful conversion of sales opportunities.

Key Responsibilities:
Solution Design & Customization: work closely with customers to understand their business challenges and technical requirements; design and propose customized solutions leveraging Cloud, Network, AI, and Machine Learning technologies that best fit their needs.
Sales Support & Enablement: collaborate with the sales team to provide technical support during the sales process, including delivering presentations, conducting technical demonstrations, and assisting in the development of proposals and RFP responses.
Customer Engagement: engage with prospects and customers throughout the sales cycle, providing technical expertise and acting as the technical liaison between the customer and the company; conduct deep-dive discussions and workshops to uncover technical requirements and offer viable solutions.
Proof of Concept (PoC): lead the technical aspects of PoC engagements, demonstrating the capabilities and benefits of the proposed solutions; collaborate with the customer to validate the solution, ensuring it aligns with their expectations.
Product Demos & Presentations: deliver compelling product demos and presentations tailored to the customer's business and technical needs, helping organizations unlock innovation and growth through AI; simplify complex technical concepts to ensure that both business and technical stakeholders understand the value proposition.
Proposal Development & RFPs: assist in crafting technical proposals, responding to RFPs (Requests for Proposals), and providing technical content that highlights the company's offerings, differentiators, and technical value.
Technical Workshops & Trainings: facilitate customer workshops and training sessions to enable customers to understand the architecture, functionality, and capabilities of the solutions offered.
Collaboration with Product & Engineering Teams: provide feedback to product management and engineering teams based on customer interactions and market demands; help shape future product offerings and improvements.
Market & Competitive Analysis: stay up to date on industry trends, new technologies, and competitor offerings in AI and Machine Learning, Cloud and Networking, to provide strategic insights to sales and product teams.
Documentation & Reporting: create and maintain technical documentation, including solution designs, architecture diagrams, and deployment plans; track and report on pre-sales activities, including customer interactions, pipeline status, and PoC results.

Key Skills and Qualifications:
Experience: minimum of 8-10 years of experience in a pre-sales or technical sales role, with a focus on AI, Cloud and Networking solutions.
Technical Expertise: solid understanding of cloud computing, data center infrastructure, networking (SDN, SD-WAN, VPNs), and emerging AI/ML technologies; experience with architecture design and solutioning across these domains, especially in hybrid-cloud and multi-cloud environments; familiarity with tools such as Kubernetes, Docker, TensorFlow, Apache Hadoop, and machine learning frameworks.
Sales Collaboration: ability to work alongside sales teams, providing the technical expertise needed to close complex deals; experience in delivering customer-focused presentations and demos.
Presentation & Communication Skills: exceptional ability to articulate technical solutions to both technical and non-technical stakeholders; strong verbal and written communication skills.
Customer-Focused Mindset: excellent customer service skills with a consultative approach to solving customer problems; ability to understand business challenges and align technical solutions accordingly; the mindset to build rapport with customers and become their trusted advisor.
Problem-Solving & Creativity: strong analytical and problem-solving skills, with the ability to design creative, practical solutions that align with customer needs.
Certifications: degree in Computer Science, Engineering, or a related field; Cloud and AI/ML certifications are highly desirable.
Team Player: ability to work collaboratively with cross-functional teams including product, engineering, and delivery teams.

Preferred Qualifications:
Industry Experience: experience in delivering solutions in industries such as finance, healthcare, or telecommunications is a plus.
Technical Expertise in AI/ML: a deeper understanding of AI/ML applications, including natural language processing (NLP), computer vision, predictive analytics, or data science use cases.
Experience with DevOps Tools: familiarity with CI/CD pipelines, infrastructure as code (IaC), and automation tools like Terraform, Ansible, or Jenkins.

Posted 3 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Summary
We are seeking an experienced Data Architect with expertise in Snowflake, dbt, Apache Airflow, and AWS to design, implement, and optimize scalable data solutions. The ideal candidate will play a critical role in defining data architecture, governance, and best practices while collaborating with cross-functional teams to drive data-driven decision-making.

Key Responsibilities
Data Architecture & Strategy: design and implement scalable, high-performance cloud-based data architectures on AWS; define data modeling standards for structured and semi-structured data in Snowflake; establish data governance, security, and compliance best practices.
Data Warehousing & ETL/ELT Pipelines: develop, maintain, and optimize Snowflake-based data warehouses; implement dbt (Data Build Tool) for data transformation and modeling; design and schedule data pipelines using Apache Airflow for orchestration.
Cloud & Infrastructure Management: architect and optimize data pipelines using AWS services like S3, Glue, Lambda, and Redshift; ensure cost-effective, highly available, and scalable cloud data solutions.
Collaboration & Leadership: work closely with data engineers, analysts, and business stakeholders to align data solutions with business goals; provide technical guidance and mentoring to the data engineering team.
Performance Optimization & Monitoring: optimize query performance and data processing within Snowflake; implement logging, monitoring, and alerting for pipeline reliability.

Required Skills & Qualifications
10+ years of experience in data architecture, engineering, or related roles. Strong expertise in Snowflake, including data modeling, performance tuning, and security best practices. Hands-on experience with dbt for data transformations and modeling. Proficiency in Apache Airflow for workflow orchestration. Strong knowledge of AWS services (S3, Glue, Lambda, Redshift, IAM, EC2, etc.). Experience with SQL, Python, or Spark for data processing. Familiarity with CI/CD pipelines and Infrastructure-as-Code (Terraform/CloudFormation) is a plus. Strong understanding of data governance, security, and compliance (GDPR, HIPAA, etc.).

Preferred Qualifications
Certifications: AWS Certified Data Analytics – Specialty, Snowflake SnowPro Certification, or dbt Certification. Experience with streaming technologies (Kafka, Kinesis) is a plus. Knowledge of modern data stack tools (Looker, Power BI, etc.). Experience in OTT streaming would be an added advantage.
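As a small illustration of the Snowflake side of this role, the following hypothetical sketch runs a warehouse query through the Snowflake Python connector. Credentials and object names are placeholders; in practice they would come from a secrets manager rather than being hard-coded.

# Query a curated Snowflake table via the Python connector (illustrative only).
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="MARTS",
)

try:
    cur = conn.cursor()
    # A typical curated-layer aggregate of the kind dbt models might feed.
    cur.execute(
        "SELECT order_date, SUM(order_total) AS revenue "
        "FROM fct_orders GROUP BY order_date ORDER BY order_date DESC LIMIT 7"
    )
    for order_date, revenue in cur.fetchall():
        print(order_date, revenue)
finally:
    conn.close()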

Posted 3 days ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: IT Administrator with Networking & Server Administration
Location: Hyderabad
Experience: 6 months – 2 years
Job Type: Paid Internship

About Us:
Instaresz Business Services Pvt Ltd is a forward-thinking, fast-growing technology company that thrives on innovative solutions. We are currently looking for an experienced IT Administrator who will take responsibility for managing and maintaining the network infrastructure, servers, and systems while ensuring smooth day-to-day IT operations across the organization.

Key Responsibilities:
Set up, configure, and maintain LAN/WAN networks, routers, switches, firewalls, and VPNs. Administer Windows/Linux servers, Active Directory, DNS, DHCP, and user access controls. Manage software and OS package installations using tools like apt, yum, dnf, and rpm. Monitor and troubleshoot network and system performance issues. Maintain web, file, mail, and database servers (Apache, Nginx, Postfix, MySQL, etc.). Implement and monitor IT security measures including firewalls, antivirus, and access policies. Perform system backups and restore processes, and support disaster recovery plans. Support virtualization platforms (VMware, Hyper-V) and assist with basic cloud infrastructure (AWS, Azure). Automate tasks using PowerShell or Bash scripting. Document IT procedures, configurations, and network diagrams.

Required Skills & Qualifications:
Proven experience in IT system administration, networking, and server management. Hands-on knowledge of networking protocols, IP addressing, subnetting, and VPNs. Experience with network devices such as routers, switches, and firewalls. Proficient in Windows Server (Active Directory, Group Policies, DNS, DHCP) and Linux administration (Ubuntu, CentOS, RHEL). In-depth knowledge of server administration, including web servers (Apache, Nginx), databases (MySQL, PostgreSQL), and mail servers (Postfix, Exchange). Experience with package management tools (apt, yum, dnf, rpm). Familiarity with cloud platforms (AWS, Azure) and virtualization tools (VMware, Hyper-V). Strong understanding of IT security practices, including firewalls, antivirus, VPNs, and access management. Scripting skills for automation (PowerShell, Bash). Excellent problem-solving and troubleshooting abilities.

Preferred Certifications:
CompTIA Network+, CompTIA Security+, Microsoft Certified: Windows Server / Azure Administrator, Cisco Certified Network Associate (CCNA), Red Hat Certified System Administrator (RHCSA), ITIL Foundation (for IT service management).

Additional Skills (Good to Have):
Experience with containerization technologies (Docker, Kubernetes). Knowledge of version control systems (Git).

Why Join Us:
Competitive salary and performance-based incentives. Dynamic and collaborative work environment. Opportunities for learning and growth. Exposure to cutting-edge technologies and industry trends.

Posted 3 days ago

Apply

7.0 years

40 Lacs

India

Remote


Experience: 7.00+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform. You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
At least 7 years of experience in data engineering. Deep hands-on experience with the AWS data stack: Glue (jobs and crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost-optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
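A minimal sketch, under stated assumptions, of the open-table-format work described above: configuring a Spark session to use an Apache Iceberg catalog backed by AWS Glue and S3, then creating and querying a partitioned table. The bucket, catalog, schema and table names are placeholders, and the required Iceberg/AWS runtime jars are assumed to be on the classpath.

# Spark + Iceberg on an AWS Glue catalog (illustrative configuration sketch).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg_glue_example")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue_catalog",
            "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.io-impl",
            "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.catalog.glue_catalog.warehouse",
            "s3://example-bucket/warehouse/")
    .getOrCreate()
)

# Create a partitioned Iceberg table and run a simple query against it.
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue_catalog.payments.transactions (
        txn_id STRING, amount DECIMAL(18, 2), txn_ts TIMESTAMP
    ) USING iceberg PARTITIONED BY (days(txn_ts))
""")
spark.sql("SELECT COUNT(*) FROM glue_catalog.payments.transactions").show()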

Posted 3 days ago

Apply

7.0 years

0 Lacs

India

Remote


About Lemongrass
Lemongrass is a software-enabled services provider, synonymous with SAP on Cloud, focused on delivering superior, highly automated Managed Services to Enterprise customers. Our customers span multiple verticals and geographies across the Americas, EMEA, and APAC. We partner with AWS, SAP, Microsoft, and other global technology leaders.

We are seeking an experienced Cloud Data Engineer with a strong background in AWS, Azure, and GCP. The ideal candidate will have extensive experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, and GCP Dataflow, as well as other ETL tools like Informatica and SAP Data Intelligence. You will be responsible for designing, implementing, and maintaining robust data pipelines and building scalable data lakes. Experience with data platforms such as Redshift, Snowflake, Databricks, and Synapse is essential. Familiarity with data extraction from SAP or ERP systems is a plus.

Key Responsibilities:
Design and Development: Design, develop, and maintain scalable ETL pipelines using cloud-native tools (AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.); a minimal illustrative ETL sketch follows this listing. Architect and implement data lakes and data warehouses on cloud platforms (AWS, Azure, GCP). Develop and optimize data ingestion, transformation, and loading processes using Databricks, Snowflake, Redshift, BigQuery, and Azure Synapse. Implement ETL processes using tools like Informatica and SAP Data Intelligence. Develop and optimize data processing jobs using Spark (Scala).
Data Integration and Management: Integrate various data sources, including relational databases, APIs, unstructured data, and ERP systems, into the data lake. Ensure data quality and integrity through rigorous testing and validation. Perform data extraction from SAP or ERP systems when necessary.
Performance Optimization: Monitor and optimize the performance of data pipelines and ETL processes. Implement best practices for data management, including data governance, security, and compliance.
Collaboration and Communication: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Collaborate with cross-functional teams to design and implement data solutions that meet business needs.
Documentation and Maintenance: Document technical solutions, processes, and workflows. Maintain and troubleshoot existing ETL pipelines and data integrations.

Qualifications
Education: Bachelor's degree in Computer Science, Information Technology, or a related field. Advanced degrees are a plus.
Experience: 7+ years of experience as a Data Engineer or in a similar role. Proven experience with cloud platforms: AWS, Azure, and GCP. Hands-on experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, and GCP Dataflow. Experience with other ETL tools like Informatica and SAP Data Intelligence. Experience building and managing data lakes and data warehouses. Proficiency with data platforms like Redshift, Snowflake, BigQuery, Databricks, and Azure Synapse. Experience with data extraction from SAP or ERP systems is a plus. Strong experience with Spark and Scala for data processing.
Skills: Strong programming skills in Python, Java, or Scala. Proficiency in SQL and query optimization techniques. Familiarity with data modeling, ETL/ELT processes, and data warehousing concepts. Knowledge of data governance, security, and compliance best practices. Excellent problem-solving and analytical skills. Strong communication and collaboration skills.

Preferred Qualifications: Experience with other data tools and technologies such as Apache Spark or Hadoop. Certifications in cloud platforms (AWS Certified Data Analytics – Specialty, Google Professional Data Engineer, Microsoft Certified: Azure Data Engineer Associate). Experience with CI/CD pipelines and DevOps practices for data engineering.

The selected applicant will be subject to a background investigation, which will be conducted, and the results of which will be used, in compliance with applicable law.

What we offer in return:
Remote Working: Lemongrass always has offered, and always will offer, 100% remote work.
Flexibility: Work where and when you like most of the time.
Training: A subscription to A Cloud Guru and a generous budget for certifications and other resources you'll find helpful.
State-of-the-art tech: An opportunity to learn and run the latest industry-standard tools.
Team: Colleagues who will challenge you, giving you the chance to learn from them and them from you.

Lemongrass Consulting is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate on the basis of race, religion, color, national origin, religious creed, gender, sexual orientation, gender identity, gender expression, age, genetic information, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
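The Lemongrass role above centers on cloud-native batch ETL from relational and ERP sources into a data lake. As a rough, hypothetical illustration of that shape (not Lemongrass's actual pipeline), the PySpark sketch below extracts over JDBC, applies a simple transformation, and loads partitioned Parquet into object storage; connection details, table and column names, and bucket paths are invented, and a matching JDBC driver is assumed to be on the Spark classpath.

```python
# Purely illustrative: a batch extract-transform-load step in PySpark, pulling
# the previous day's orders from a relational source over JDBC and landing
# partitioned Parquet in object storage. Connection details, table names, and
# bucket paths are hypothetical; a matching JDBC driver is assumed on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("jdbc-to-datalake-sketch").getOrCreate()

# Extract: incremental pull of yesterday's orders, pushed down to the database.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.internal:5432/sales")
    .option("dbtable", "(SELECT * FROM orders WHERE order_date = current_date - 1) AS src")
    .option("user", "etl_user")
    .option("password", "change-me")  # in practice, resolve from a secrets manager
    .load()
)

# Transform: derive a reporting amount and a partition column.
curated = (
    orders
    .withColumn("amount_usd", F.col("amount") * F.col("fx_rate"))
    .withColumn("load_date", F.current_date())
)

# Load: partitioned Parquet in the lake, readable by Redshift Spectrum, Athena,
# Synapse, or BigQuery external tables depending on the target cloud.
(
    curated.write
    .mode("append")
    .partitionBy("load_date")
    .parquet("s3://example-datalake/curated/orders/")
)
```

An equivalent flow could be expressed in AWS Glue, Azure Data Factory, or GCP Dataflow; the structure (incremental extract, transform, partitioned load) stays the same.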

Posted 3 days ago

Apply

7.0 years

40 Lacs

Kochi, Kerala, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The role description, requirements, engagement model, and application steps are identical to the first MatchMove listing above; only the location differs.

Posted 3 days ago

Apply

7.0 years

40 Lacs

Greater Bhopal Area

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The role description, requirements, engagement model, and application steps are identical to the first MatchMove listing above; only the location differs.

Posted 3 days ago

Apply

7.0 years

40 Lacs

Indore, Madhya Pradesh, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The role description, requirements, engagement model, and application steps are identical to the first MatchMove listing above; only the location differs.

Posted 3 days ago

Apply

7.0 years

40 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The role description, requirements, engagement model, and application steps are identical to the first MatchMove listing above; only the location differs.

Posted 3 days ago

Apply

7.0 years

40 Lacs

Chandigarh, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The role description, requirements, engagement model, and application steps are identical to the first MatchMove listing above; only the location differs.

Posted 3 days ago

Apply

7.0 years

40 Lacs

Thiruvananthapuram, Kerala, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The role description, requirements, engagement model, and application steps are identical to the first MatchMove listing above; only the location differs.

Posted 3 days ago

Apply

7.0 years

40 Lacs

Dehradun, Uttarakhand, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The role description, requirements, engagement model, and application steps are identical to the first MatchMove listing above; only the location differs.

Posted 3 days ago

Apply

7.0 years

40 Lacs

Vijayawada, Andhra Pradesh, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The role description, requirements, engagement model, and application steps are identical to the first MatchMove listing above; only the location differs.

Posted 3 days ago

Apply

7.0 years

40 Lacs

Mysore, Karnataka, India

Remote


Technical Lead - Data Platform (MatchMove, via Uplers). The role description, requirements, engagement model, and application steps are identical to the first MatchMove listing above; only the location differs.

Posted 3 days ago

Apply

7.0 years

40 Lacs

Patna, Bihar, India

Remote

Linkedin logo


Posted 3 days ago

Apply

Exploring Apache Jobs in India

The Apache Software Foundation maintains a broad portfolio of widely used open-source projects, from the Apache HTTP Server to big-data tools such as Hadoop, Spark, and Kafka. In India, demand for professionals with expertise in Apache tools and technologies is on the rise, and job seekers pursuing Apache-related roles can find opportunities across a range of industries. Let's delve into the Apache job market in India to get a better understanding of the landscape.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving IT sectors and see a high demand for Apache professionals across different organizations.

Average Salary Range

The salary range for Apache professionals in India varies based on experience and skill level:
  • Entry-level: INR 3-5 lakhs per annum
  • Mid-level: INR 6-10 lakhs per annum
  • Experienced: INR 12-20 lakhs per annum

Career Path

In the Apache job market in India, a typical career path may progress as follows:
  1. Junior Developer
  2. Developer
  3. Senior Developer
  4. Tech Lead
  5. Architect

Related Skills

Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:
  • Linux
  • Networking
  • Database Management
  • Cloud Computing

Interview Questions

  • What is Apache HTTP Server and how does it differ from Apache Tomcat? (medium)
  • Explain the difference between Apache Hadoop and Apache Spark. (medium)
  • What is mod_rewrite in Apache and how is it used? (medium)
  • How do you troubleshoot common Apache server errors? (medium)
  • What is the purpose of .htaccess file in Apache? (basic)
  • Explain the role of Apache Kafka in real-time data processing; see the Python sketch after this list. (medium)
  • How do you secure an Apache web server? (medium)
  • What is the significance of Apache Maven in software development? (basic)
  • Explain the concept of virtual hosts in Apache. (basic)
  • How do you optimize Apache web server performance? (medium)
  • Describe the functionality of Apache Solr. (medium)
  • What is the purpose of Apache Camel? (medium)
  • How do you monitor Apache server logs? (medium)
  • Explain the role of Apache ZooKeeper in distributed applications. (advanced)
  • How do you configure SSL/TLS on an Apache web server? (medium)
  • Discuss the advantages of using Apache Cassandra for data management. (medium)
  • What is the Apache Lucene library used for? (basic)
  • How do you handle high traffic on an Apache server? (medium)
  • Explain the concept of .htpasswd in Apache. (basic)
  • What is the role of Apache Thrift in software development? (advanced)
  • How do you troubleshoot Apache server performance issues? (medium)
  • Discuss the importance of Apache Flume in data ingestion. (medium)
  • What is the significance of Apache Storm in real-time data processing? (medium)
  • How do you deploy applications on Apache Tomcat? (medium)
  • Explain the concept of .htaccess directives in Apache. (basic)
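For the Kafka question flagged above, here is a minimal Python sketch, assuming the kafka-python client and a broker reachable at localhost:9092; the topic name and event fields are placeholders rather than any particular production setup. It shows the core pattern behind Kafka's role in real-time data processing: producers publish events to a durable, partitioned log, and consumers read and react to them independently.

# Minimal kafka-python sketch: one side publishes events, the other consumes them.
import json

from kafka import KafkaConsumer, KafkaProducer

TOPIC = "page-views"  # placeholder topic name

# Producer side: serialize events as JSON and publish them to the topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"user_id": 42, "url": "/pricing"})
producer.flush()

# Consumer side: read events as they arrive and process them in near real time.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    event = message.value
    print(f"partition={message.partition} offset={message.offset} event={event}")

A useful point to draw out in an interview is that this decoupling lets many independent consumers process the same stream at their own pace, which is why Kafka is a common backbone for real-time pipelines.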

Conclusion

As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!
