Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Tech Lead – Middleware Administrator
Location: Offshore/India
Who are we looking for? We are looking for a Tech Lead – Middleware Administrator with 10+ years of experience. A Middleware Administrator is responsible for the installation, configuration, maintenance, and optimization of middleware technologies that connect software applications and systems. They ensure seamless communication and data exchange between different applications and services. This includes tasks like troubleshooting, security implementation, performance monitoring, and documentation.
Tech Stack:
o IBM WAS (8 & 9)
o IBM MQ
o IBM MQ Advanced (RDQM)
o Apache Tomcat
o TomEE
o Java (IBM / open source)
o IBM HTTP web server
o IBM DataPower
o IBM ODM (Operational Decision Manager)
o Apache Webserver
o Oracle WebLogic – limited to vendor-managed appliances; no major items to deal with
o Microsoft IIS
o SiteMinder – RSA
o VDS (Virtual Directory Server)
o IBM WAS Liberty
o AWS API Gateway
o Integration microservices (Spring Boot)
o Oracle Documanage Bridge
o Oracle Documaker
o Actuate
o MS Visual Studio
o WinSCP
o Adobe
Technical Skills:
· Strong middleware expertise in the installation, configuration, maintenance, and optimization of middleware technologies that connect software applications and systems.
Key Responsibilities:
· Installation and Configuration: Installing and configuring middleware software and components according to business requirements and best practices.
· Management: Managing various layers of middleware software, including troubleshooting, performance tuning, and system upgrades.
· Troubleshooting and Problem Solving: Identifying and resolving technical issues related to middleware infrastructure.
· Security: Implementing and maintaining security configurations and protocols for middleware environments.
· Documentation: Creating and maintaining comprehensive documentation for middleware platforms, processes, and procedures.
· Collaboration: Working with developers, other administrators, and stakeholders to ensure smooth operations and integration.
· Performance Monitoring: Monitoring server and system performance and taking action to optimize performance.
· Continuous Improvement: Staying up to date with the latest industry trends and technologies to drive continuous improvement and innovation.
· DevOps and Agile Practices: Aligning middleware operations with DevOps and Agile principles and contributing to automation of middleware-related tasks.
· Cloud Migration: Providing expertise and support for middleware platform migration to cloud environments.
Qualification:
· Education qualification: Any Graduate
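As a rough illustration of the performance-monitoring responsibility listed above, here is a minimal sketch in Python (the posting does not mandate a scripting language) that polls middleware HTTP endpoints and reports status and latency; the host names, ports, and paths are assumptions, not part of the posting:

```python
# Minimal health-check sketch for middleware endpoints (illustrative only).
# Endpoint URLs and the timeout are assumptions, not requirements from the posting.
import time
import urllib.request
import urllib.error

ENDPOINTS = {
    "ibm-http-server": "http://middleware-host:80/",       # hypothetical host
    "tomcat": "http://middleware-host:8080/",               # hypothetical host/port
}

def check(name: str, url: str, timeout: float = 5.0) -> None:
    """Issue a GET and report HTTP status and latency for one endpoint."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            latency = time.monotonic() - start
            print(f"{name}: HTTP {resp.status} in {latency:.2f}s")
    except (urllib.error.URLError, OSError) as exc:
        print(f"{name}: DOWN ({exc})")

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        check(name, url)
```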
Posted 1 day ago
5.0 years
0 Lacs
Delhi, India
On-site
Title of the Position: Senior Associate (IT) (On Contract)
No. of Positions: 02 (UR) (01 position for the PHP (Laravel) profile and 01 position for the Power BI profile)
Qualification: BE/B.Tech. (Computer Science Engineering/Information Technology)/M.Tech/MCA or equivalent from a recognized university
A. Senior Associate (IT), 01 position for the PHP (Laravel) profile
Experience Required: Should have at least 5 years of post-qualification experience in building and maintaining robust web applications using PHP and the Laravel framework. The candidate should have experience in critical applications, ensuring the design and implementation of scalable, secure, and high-performing applications.
The following skills are desired:
Strong proficiency in PHP and the Laravel framework.
Well versed in RESTful API development and integration.
Excellent understanding of HTML, CSS, JavaScript, and jQuery.
Proven experience with Oracle database management.
Familiarity with Node.js, JSON, and GitHub.
Knowledge of token-based authentication and data security implementation.
Hands-on experience with Apache, Linux, and Docker.
Practical experience in Oracle Cloud Services implementation.
Preferred Skills:
Attention to detail and ability to write clean, maintainable code.
Strong problem-solving and troubleshooting skills.
Ability to work independently and collaboratively within cross-functional teams.
Experience in the ESG domain and knowledge of the Postgres database and Microsoft Power BI is advantageous.
Experience with CI/CD pipelines is preferred.
Key Objectives and Responsibilities:
Develop and maintain web applications using Laravel and PHP.
Build and integrate RESTful APIs to support application functionalities.
Collaborate with frontend developers to implement responsive UI components using HTML, CSS, JavaScript, and jQuery.
Manage and optimize Oracle databases for performance and reliability.
Integrate third-party APIs and manage secure data exchanges.
Implement token-based authentication and authorization mechanisms.
Apply data security best practices using Apache server configurations.
Utilize GitHub for version control and collaborative development.
Work with JSON for data serialization and system integration.
Contribute to containerized application development using Docker.
Deploy and maintain applications in Oracle Cloud Infrastructure (OCI).
Work in Linux environments for development and deployment tasks.
B. Senior Associate (IT), 01 position for the Power BI profile
Experience Required: Should have at least 5 years of post-qualification experience in designing, developing, and optimizing data visualizations and business intelligence solutions using Microsoft Power BI.
The following skills are desired:
Expertise in DAX and Power Query for efficient data modelling and calculations, and integration with various data sources to deliver actionable insights.
Ability to optimize Power BI performance for large datasets and enterprise-scale solutions.
Preferred Skills:
Strong analytical and problem-solving skills to interpret complex data sets.
Excellent communication and collaboration abilities to work with stakeholders and cross-functional teams.
Experience in data governance and security to ensure compliance with best practices.
Adaptability to evolving business requirements and emerging technologies.
Mentorship skills to guide junior team members in Power BI development.
Experience in the PHP (Laravel) framework is advantageous.
Experience in the Postgres database and CI/CD implementation is a plus.
Practical experience in Oracle Cloud Services implementation is a plus.
Key Objectives and Responsibilities:
Develop and maintain interactive dashboards and reports using Power BI.
Design and implement data models, ensuring accuracy and efficiency.
Optimize DAX queries for performance and scalability.
Integrate Power BI with multiple data sources, including SQL Server and cloud-based solutions.
Ensure data governance and security best practices are followed.
Collaborate with teams to translate business needs into visual analytics.
Provide training and support to users on Power BI functionalities.
Continuously enhance Power BI solutions to improve decision-making processes.
Deploy and maintain applications in Oracle Cloud Infrastructure (OCI).
Proficiency in Oracle database and data integration to connect multiple sources effectively.
Develop and optimize Oracle and Postgres database scripts.
HOW TO APPLY: Candidates fulfilling the above eligibility criteria may submit their Resume/Biodata by email to contract@ifciltd.com. Please write the "Title of the Position" in the subject line of the e-mail. Kindly enclose self-attested photocopies of the following documents in the email:
Proof of date of birth
Educational certificates
Relevant experience certificates (containing areas and period of service)
Note: LAST DATE FOR SUBMISSION THROUGH E-MAIL IS JUNE 26, 2025.
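A minimal sketch of the token-based RESTful API integration described in the PHP (Laravel) profile above, shown in Python purely for illustration rather than in the posting's PHP stack; the endpoint URL, token, and payload are hypothetical:

```python
# Illustrative token-authenticated REST call (hypothetical endpoint and token).
import json
import urllib.request

API_BASE = "https://api.example.com"   # hypothetical base URL
TOKEN = "replace-with-issued-token"    # e.g. a bearer token issued by the API

def create_record(payload: dict) -> dict:
    """POST a JSON payload with a Bearer token and return the decoded response."""
    req = urllib.request.Request(
        url=f"{API_BASE}/api/records",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(create_record({"name": "demo", "status": "active"}))
```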
Posted 1 day ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Role Overview: Develop efficient SQL queries and maintain views, models, and data structures across federated and transactional databases to support analytics and reporting.
SQL (Advanced)
Python – for data exploration and scripting
Shell scripting – for lightweight automation
Key Responsibilities:
Write complex SQL queries for data extraction and transformations
Build and maintain views, materialized views, and data models
Enable efficient federated queries and optimize joins across databases
Support performance tuning, indexing, and query optimization efforts
Primary:
Expertise in MS SQL Server / Oracle DB / PostgreSQL, columnar DBs like DuckDB, and federated data access
Good understanding of the Apache Arrow columnar data format, Flight SQL, and Apache Calcite
Secondary:
Experience with data modelling, ER diagrams, and schema design
Familiarity with reporting-layer backends (e.g., Power BI datasets)
Familiarity with utility operations and power distribution is preferred
Experience with cloud-hosted databases is preferred
Exposure to data lakes in cloud ecosystems is a plus
Optional:
Familiarity with Grid CIM (Common Information Model; IEC 61970, IEC 61968)
Familiarity with GE ADMS DNOM (Distribution Network Object Model)
GE GridOS Data Fabric
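A minimal, illustrative sketch of the view-building and join-optimization work described above, using Python's built-in sqlite3 purely as a stand-in for the SQL Server / Oracle / PostgreSQL / DuckDB engines named in the posting; the table names, columns, and data are hypothetical:

```python
# Illustrative only: build a reporting view and an index to speed up a join.
# sqlite3 stands in for the engines named in the posting; the schema is hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

cur.executescript("""
    CREATE TABLE meters   (meter_id INTEGER PRIMARY KEY, feeder TEXT);
    CREATE TABLE readings (meter_id INTEGER, read_ts TEXT, kwh REAL);

    -- Index the join/filter column so the aggregate below avoids a full scan.
    CREATE INDEX idx_readings_meter ON readings(meter_id);

    -- Reporting view: daily consumption per feeder, ready for a BI dataset.
    CREATE VIEW v_daily_feeder_kwh AS
    SELECT m.feeder,
           substr(r.read_ts, 1, 10) AS read_date,
           SUM(r.kwh)               AS total_kwh
    FROM readings r
    JOIN meters   m ON m.meter_id = r.meter_id
    GROUP BY m.feeder, read_date;
""")

cur.executemany("INSERT INTO meters VALUES (?, ?)", [(1, "F-01"), (2, "F-02")])
cur.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                [(1, "2025-06-01T01:00", 1.2), (1, "2025-06-01T02:00", 0.8),
                 (2, "2025-06-01T01:00", 2.5)])

for row in cur.execute("SELECT * FROM v_daily_feeder_kwh ORDER BY feeder"):
    print(row)
```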
Posted 1 day ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration and continuous integration). You will conduct quality control tests in order to ensure full compliance with specified standards and end user requirements. You will execute tests using established plans and scripts, document problems in an issues log, and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend and implement changes to enhance effectiveness of QA strategies.
What You Will Do
Independently develop scalable and reliable automated tests and frameworks for testing software solutions.
Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes and environments.
Develop regression suites, develop automation scenarios, and move automation to an agile continuous testing model.
Pro-actively and collaboratively take part in all testing-related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations.
What Experience You Need
Bachelor's degree in a STEM major or equivalent experience
5-7 years of software testing experience
Able to create and review test automation according to specifications
Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, CSS
Creation and use of big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others with respect to software validation
Created test strategies and plans
Led complex testing efforts or projects
Participated in Sprint Planning as the Test Lead
Collaborated with Product Owners, SREs, Technical Architects to define testing strategies and plans
Design and development of microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes
Deploy and release software using Jenkins CI/CD pipelines, understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs
Cloud Certification Strongly Preferred
What Could Set You Apart
An ability to demonstrate successful performance of our Success Profile skills, including:
Attention to Detail - Define test case candidates for automation that are outside of product specifications, e.g. negative testing; create thorough and accurate documentation of all work including status updates to summarize project highlights; validate that processes operate properly and conform to standards
Automation - Automate defined test cases and test suites per project
Collaboration - Collaborate with Product Owners and the development team to plan and assist with user acceptance testing; collaborate with product owners, development leads and architects on functional and non-functional test strategies and plans
Execution - Develop scalable and reliable automated tests; develop performance testing scripts to assure products are adhering to the documented SLO/SLI/SLAs; specify the need for test data types for automated testing; create automated tests and test data for projects; develop automated regression suites; integrate automated regression tests into the CI/CD pipeline; work with teams on E2E testing strategies and plans against multiple product integration points
Quality Control - Perform defect analysis and in-depth technical root cause analysis, identifying trends and recommendations to resolve complex functional issues and process improvements; analyze results of functional and non-functional tests and make recommendations for improvements
Performance / Resilience - Understand application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform; conduct the performance and resilience testing to ensure the products meet SLAs/SLOs
Quality Focus - Review test cases for complete functional coverage; review the quality section of the Production Readiness Review for completeness; recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; ensure communications are thorough and accurate for all work documentation including status and project updates
Risk Mitigation - Work with Product Owners, QE and development team leads to track and determine prioritization of defect fixes
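As a rough illustration of the automated regression and negative testing described above, here is a small sketch using Python's standard unittest module for brevity (the posting's stack is Java/Spring Boot, and the function under test is a hypothetical stand-in):

```python
# Illustrative regression-style tests; the validate_ssn helper is hypothetical.
import unittest

def validate_ssn(value: str) -> bool:
    """Toy stand-in for a unit under test: 9 digits, optionally dash-separated."""
    digits = value.replace("-", "")
    return len(digits) == 9 and digits.isdigit()

class ValidateSsnRegression(unittest.TestCase):
    def test_accepts_valid_formats(self):
        for ok in ("123456789", "123-45-6789"):
            self.assertTrue(validate_ssn(ok), ok)

    def test_rejects_invalid_input(self):
        # Negative testing: inputs outside the product specification.
        for bad in ("", "12345", "abc-de-fghi", "1234567890"):
            self.assertFalse(validate_ssn(bad), bad)

if __name__ == "__main__":
    unittest.main()
```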
Posted 1 day ago
5.0 years
0 Lacs
India
On-site
This posting is for one of our International Clients.
About the Role
We're creating a new certification: Inside Gemini: Gen AI Multimodal and Google Intelligence (Google DeepMind). This course is designed for technical learners who want to understand and apply the capabilities of Google's Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We're looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You'll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.
Responsibilities
As the SME, you'll partner with learning experience designers and content developers to:
Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind's reinforcement learning libraries.
Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
Ensure all content reflects current, accurate usage of Google's multimodal tools and services.
Be available during U.S. business hours to support project milestones, reviews, and content feedback.
This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.
Essential Tools & Platforms
A successful SME in this role will demonstrate fluency and hands-on experience with the following:
Google Cloud Platform (GCP): Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment); Cloud Functions, Cloud Run (for inference endpoints); BigQuery and Cloud Storage (for handling large image-text datasets); AI Platform Notebooks or Colab Pro
Google DeepMind Technologies: JAX and Haiku (for neural network modeling and research-grade experimentation); DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations); RLax or TF-Agents (for building and modifying RL pipelines)
AI/ML & Multimodal Tooling: Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting); TensorFlow 2.x and PyTorch (for model interoperability); Label Studio, Cloud Vision API (for annotation and image-text preprocessing)
Data Science & MLOps: DVC or MLflow (for dataset and model versioning); Apache Beam or Dataflow (for processing multimodal input streams); TensorBoard or Weights & Biases (for visualization)
Content Authoring & Collaboration: GitHub or Cloud Source Repositories; Google Docs, Sheets, Slides; screen recording tools like Loom or OBS Studio
Required skills and experience:
Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot).
Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini's native multimodal capabilities.
Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development.
Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related technical field.
Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
Strong programming experience in Python and experience deploying machine learning pipelines.
Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.
Preferred:
Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
Prior contributions to open-source AI projects or technical community engagement.
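As a sketch of the RAG pattern the posting emphasizes, here is a deliberately minimal retrieval-plus-prompt-assembly skeleton in Python; the keyword scorer stands in for embeddings and a real vector database (Pinecone, ChromaDB), and the final model call is left as a stub rather than assuming any particular Gemini SDK signature:

```python
# Minimal RAG skeleton: retrieve relevant chunks, assemble a grounded prompt.
# Scoring is naive keyword overlap, standing in for embeddings + a vector DB.
from collections import Counter

DOCUMENTS = [
    "Gemini models accept interleaved text and image inputs.",
    "Vertex AI hosts tuned Gemini models behind managed endpoints.",
    "RLax provides building blocks for reinforcement learning losses in JAX.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase tokens between the query and a document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the top-k documents by overlap score."""
    return sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("What inputs do Gemini models accept?"))
    # In a real pipeline, the prompt would now be sent to the model endpoint.
```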
Posted 1 day ago
2.0 years
0 Lacs
India
On-site
About Us
Newfold Digital is a leading web technology company serving nearly 7 million customers globally. Established in 2021 through the combination of leading web services providers Endurance Web Presence and Web.com Group, Newfold's mission is to empower success in a connected world with a focus on helping businesses of all sizes thrive online. The company's world-class family of brands includes BlueHost, HostGator, iPage, Domain.com, A Small Orange, MOJO Marketplace, BigRock, and ResellerClub.
What you'll do?
Participate in 24x7 shifts.
Monitor the stability of our products with various internal tools.
L1/L2 Support ownership of all hosting products (cPanel/Plesk/VPS/Cloud/Dedicated).
Handle incident response, troubleshooting and fixes for various products/services.
Handle escalations as per policies/procedures.
Get different internal/external groups together to resolve production site issues effectively.
Communicate clearly on tickets and phone calls made to the team about various issues.
Exhibit a sense of urgency to resolve issues.
Build advanced automation workflows for automating repeated issues.
Work with our infrastructure team to deploy and maintain Linux/Windows servers using automated scripts and a predefined runbook.
Ensure SLAs and operational standards are met.
Raise tickets to different internal groups to resolve recurrent problems and alerts, and follow up on escalated issues.
Liaise with engineering teams for RCAs and permanent resolutions on issues affecting production sites.
Contribute to the Operations handbook.
Ensure smooth hand-offs between shifts.
Who you are? (2-3 years of experience)
Educational Qualifications: Graduate, preferably in Information Technology or Computer Science, with consistently strong academic performance.
Linux: Good understanding of Linux systems, any shell/Bash, sed/awk/grep/egrep, VI/VIM/Emacs, netstat, lsof, strace, ps/top/atop/dstat, grub boot config & systems rescue, fstab/disk labels, ext3/ext4, IPtables, sysstat (sar/vmstat/iostat etc), run-levels & startup scripts, sudo/chroot/chkrootkit/rkhunter.
Windows: Windows 2000/2003/2008, NTFS chkdsk/ACLs etc, troubleshooting system/application faults using Event logs, updates via WSUS, Terminal Services, IIS fundamentals.
Fundamentals: Basic DNS & networking, TCP/UDP, IP routing, HA & load balancing concepts.
Application Protocols: SMTP, HTTP, FTP, IMAP, POP.
Shifts: Must be willing to work in shifts (including at night and on holidays).
Good To Have
Understanding of Cloud Systems/Hardware: RAID, LOM/IPMI/IP KVMs, Dell hardware.
Windows: WMI, PowerShell/VB scripts, MS-SQL fundamentals.
Applications: Postfix/qmail/Exim, database systems fundamentals (MySQL/Postgres), Nginx/Apache (mod_php, mod_fcgid, CGI, php-fpm etc), Tomcat.
Tools/Utilities: Nagios, DHCP, Kickstart/Cobbler, Yum, RPM, GIT/SVN.
Others: Regular expressions, rescue kits like TRK, etc.
Certification: Red Hat Certified Engineer (RHCE), GCP.
Why you'll love us.
We've evolved; we provide three work environment scenarios. You can feel like a Newfolder in a work-from-home, hybrid, or work-from-the-office environment.
Work-life balance. Our work is thrilling and meaningful, but we know balance is key to living well.
We celebrate one another's differences. We're proud of our culture of diversity and inclusion. We foster a culture of belonging. Our company and customers benefit when employees bring their authentic selves to work. We have programs that bring us together on important issues and provide learning and development opportunities for all employees. We have 20+ affinity groups where you can network and connect with Newfolders globally.
We care about you. At Newfold, taking care of our employees is our top priority. We make sure that cutting-edge benefits are in place for you. Some of the benefits you will have: we have partnered with some of the best insurance providers to provide you excellent health insurance options, education/certification sponsorships to give you a chance to further your knowledge, flexi-leaves to take personal time off, and much more.
Building a community one domain at a time, one employee at a time. All our employees are eligible for a free domain and WordPress blog as we sponsor the domain registration costs.
Where can we take you? We're fans of helping our employees learn different aspects of the business, be challenged with new tasks, be mentored, and grow their careers. Unfold new possibilities with #teamnewfold!
This Job Description includes the essential job functions required to perform the job described above, as well as additional duties and responsibilities. This Job Description is not an exhaustive list of all functions that the employee performing this job may be required to perform. The Company reserves the right to revise the Job Description at any time, and to require the employee to perform functions in addition to those listed above.
Posted 1 day ago
10.0 years
0 Lacs
Delhi, India
On-site
Title of the Position: Consultant (IT) (On Contract)
No. of Positions: 01 (UR)
Qualification: BE/B.Tech. (Computer Science Engineering/Information Technology)/M.Tech/MCA or equivalent from a recognized university
Experience Required: Should have at least 10 years of post-qualification experience in PHP and the Laravel framework. The ideal candidate should have a strong background in developing scalable web applications, RESTful APIs, and secure systems.
The following skills are desired:
Backend Development: Expert in PHP, the Laravel framework, and SQL scripting.
API Development: Proficient in designing and integrating RESTful APIs.
Frontend Technologies: Well versed in HTML, CSS, JavaScript, jQuery.
Database Management: Extensive experience with Oracle databases.
Version Control: Proficiency in Git/GitHub.
Security: Knowledge of token-based authentication and data security using Apache.
DevOps Tools: Experience with Docker, Linux-based environments, and CI/CD pipelines.
Cloud Services: Must have hands-on experience with Oracle Cloud implementation.
Should have experience with other current technologies, viz. Node.js and JSON.
Preferred Skills:
Strong problem-solving and analytical skills.
Excellent communication and team collaboration abilities.
Experience in ESG domains is advantageous.
Ability to mentor junior developers and review code.
Experience with the Postgres database and Microsoft Power BI is advantageous.
Key Objectives and Responsibilities:
Design, develop, and maintain applications using PHP and the Laravel framework.
Build, consume, and integrate RESTful APIs.
Collaborate with front-end developers to integrate user-facing elements using HTML, CSS, JavaScript, and jQuery.
Develop and manage complex Oracle database systems and ensure data integrity.
Implement secure authentication and authorization mechanisms (e.g., token-based systems).
Integrate third-party APIs and manage the end-to-end API lifecycle.
Ensure secure and scalable implementation of applications using Apache, with attention to data protection.
Work with version control systems such as GitHub for code management and collaboration.
Utilize JSON for data interchange between systems.
Implement and manage containerized applications using Docker.
Deploy and maintain applications on Oracle Cloud Infrastructure (OCI).
Work in a Linux-based development and deployment environment.
Maintain high standards of code quality and unit testing.
Mentor junior team members, review their code, and provide valuable insights to help resolve issues.
Independently create CI/CD pipelines, design applications, and troubleshoot critical issues with a structured problem-solving approach.
HOW TO APPLY: Candidates fulfilling the above eligibility criteria may submit their Resume by email to contract@ifciltd.com. Please write the "Title of the Position" in the subject line of the e-mail. Kindly enclose self-attested photocopies of the following documents in the email:
Proof of date of birth
Educational certificates
Relevant experience certificates (containing areas and period of service)
In the case of reserved-category candidates, an updated Caste Certificate may be provided.
Note: LAST DATE FOR SUBMISSION THROUGH E-MAIL IS JUNE 26, 2025.
Posted 1 day ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from Kellton Tech!!
Job Title: Java & ADF Developer / Java with Spring Boot
Location: Hyderabad (Onsite – Client Location)
Experience: 5-12 years
Employment Type: Full-time / Contract (as applicable)
Joining: Immediate to 30 days preferred
About Kellton: We are a global IT services and digital product design and development company with subsidiaries that serve startup, mid-market, and enterprise clients across diverse industries, including Finance, Healthcare, Manufacturing, Retail, Government, and Nonprofits. At Kellton, we believe that our people are our greatest asset. We are committed to fostering a culture of collaboration, innovation, and continuous learning. Our core values include integrity, customer focus, teamwork, and excellence. To learn more about our organization, please visit us at www.kellton.com
Are you craving a dynamic and autonomous work environment? If so, this opportunity may be just what you're looking for. At our company, we value your critical thinking skills and encourage your input and creative ideas to supply the best talent available. To boost your productivity, we provide a comprehensive suite of IT tools and practices backed by an experienced team to work with.
Req 1: Java with Spring Boot
Technical Skills:
Java (should also be able to work on older versions – versions 7 & 8)
Spring Boot, Spring JPA, Spring Security
MySQL
IDEs: primarily NetBeans, also Eclipse
Jasper Reports
Application Servers: Tomcat, JBoss (WildFly)
Basic knowledge of Linux
Day-to-Day Responsibilities:
Handling API-related issues and bug fixes
Developing new APIs and features as per business requirements
Coordinating and deploying builds in UAT environments
Collaborating with the QA and product teams to ensure smooth releases
Additional skillset info: Java, Spring Boot, Hibernate, JUnit, JWT, OAuth, Redis, Docker, Kafka (optional), OpenAPI standards, Jenkins/Git pipelines, etc.
Req 2: Java & Oracle ADF Developer
About the Role: We are looking for a skilled Java and Oracle ADF Developer to join our team for an on-site deployment at our client's location in Hyderabad. The ideal candidate should have a solid background in Java development, Oracle ADF, and associated tools and technologies, strong problem-solving abilities, and experience working in a Linux-based environment.
Key Responsibilities
Develop and maintain enterprise-grade applications using Oracle ADF and Java 7/8.
Design and implement reports using Jasper Reports and iReport.
Manage deployments and configurations on the JBoss application server.
Work with development tools such as NetBeans, Eclipse, or JDeveloper.
Perform data management tasks using MySQL.
Write and maintain shell scripts and configure cron jobs for scheduled tasks.
Administer and monitor systems in a Linux environment.
Utilize Apache Superset for data visualization and dashboard reporting.
Collaborate with cross-functional teams to deliver high-quality solutions on time.
Troubleshoot issues and provide timely resolutions.
Required Skills
Proficiency in Java 7/8 and object-oriented programming
Strong hands-on experience with Oracle ADF
Expertise in Jasper Reports, iReport, and report generation
Experience with JBoss server setup and application deployment
Familiarity with NetBeans, Eclipse, or JDeveloper IDEs
Good understanding of MySQL database design and queries
Experience with Linux OS and shell scripting
Ability to set up and manage cron jobs
Knowledge of Apache Superset or similar BI tools
Strong problem-solving and debugging skills
Good to Have
Exposure to Agile development practices
Familiarity with REST APIs and web services
Knowledge of version control tools (e.g., Git)
Education
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
What we offer you:
· Existing clients in multiple domains to work with.
· Strong and efficient team committed to quality output.
· Enhance your knowledge and gain industry domain expertise by working in varied roles.
· A team of experienced, fun, and collaborative colleagues.
· Hybrid work arrangement for flexibility and work-life balance (if the client/project allows).
· Competitive base salary and job satisfaction.
Join our team and become part of an exciting company where your expertise and ideas are valued, and where you can make a significant impact in the IT industry. Apply today! Interested applicants, please submit your detailed resume stating your current and expected compensation and notice period to srahaman@kellton.com
Posted 1 day ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Designation: Solution Architect
Office Location: Gurgaon
Position Description: As a Solution Architect, you will be responsible for leading the development and delivery of the platforms. This includes overseeing the entire product lifecycle from solution through execution and launch, building the right team, and close collaboration with business and product teams.
Primary Responsibilities:
Design end-to-end solutions that meet business requirements and align with the enterprise architecture.
Define the architecture blueprint, including integration, data flow, application, and infrastructure components.
Evaluate and select appropriate technology stacks, tools, and frameworks.
Ensure proposed solutions are scalable, maintainable, and secure.
Collaborate with business and technical stakeholders to gather requirements and clarify objectives.
Act as a bridge between business problems and technology solutions.
Guide development teams during the execution phase to ensure solutions are implemented according to design.
Identify and mitigate architectural risks and issues.
Ensure compliance with architecture principles, standards, policies, and best practices.
Document architectures, designs, and implementation decisions clearly and thoroughly.
Identify opportunities for innovation and efficiency within existing and upcoming solutions.
Conduct regular performance and code reviews, and provide feedback to development team members to support their professional development.
Lead proof-of-concept initiatives to evaluate new technologies.
Functional Responsibilities:
Facilitate daily stand-up meetings, sprint planning, sprint review, and retrospective meetings.
Work closely with the product owner to prioritize the product backlog and ensure that user stories are well-defined and ready for development.
Identify and address issues or conflicts that may impact project delivery or team morale.
Experience with Agile project management tools such as Jira and Trello.
Required Skills:
Bachelor's degree in Computer Science, Engineering, or a related field.
7+ years of experience in software engineering, with at least 3 years in a solution architecture or technical leadership role.
Proficiency with the AWS or GCP cloud platform.
Strong implementation knowledge of the JS tech stack: Node.js and React.js.
Experience with database engines MySQL and PostgreSQL, with proven knowledge of database migrations and high-throughput, low-latency use cases.
Experience with key-value stores like Redis, MongoDB and similar.
Preferred knowledge of distributed technologies (Kafka, Spark, Trino or similar) with proven experience in event-driven data pipelines.
Proven experience with setting up big data pipelines to handle high-volume transactions and transformations.
Experience with BI tools - Looker, PowerBI, Metabase or similar.
Experience with data warehouses like BigQuery, Redshift, or similar.
Familiarity with CI/CD pipelines, containerization (Docker/Kubernetes), and IaC (Terraform/CloudFormation).
Good to Have:
Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, TOGAF, etc.
Experience setting up analytical pipelines using BI tools (Looker, PowerBI, Metabase or similar) and low-level Python tools like Pandas, NumPy, PyArrow.
Experience with data transformation tools like DBT, SQLMesh or similar.
Experience with data orchestration tools like Apache Airflow, Kestra or similar.
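As a small illustration of the orchestration tooling mentioned above, here is a minimal Apache Airflow DAG sketch in Python (Airflow 2.4+ style); the DAG id, schedule, and task bodies are assumptions for illustration, not requirements from the posting:

```python
# Minimal Airflow DAG sketch (illustrative; dag_id, schedule, and task bodies are assumed).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from the source system")

def transform():
    print("clean and aggregate events for the warehouse")

with DAG(
    dag_id="example_event_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",     # Airflow 2.4+ keyword; older 2.x uses schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # run transform only after extract succeeds
```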
Work Environment Details:
About Affle: Affle is a global technology company with a proprietary consumer intelligence platform that delivers consumer engagement, acquisitions, and transactions through relevant mobile advertising. The platform aims to enhance returns on marketing investment through contextual mobile ads and also by reducing digital ad fraud. While Affle's consumer platform is used by online and offline companies for measurable mobile advertising, its enterprise platform helps offline companies go online through platform-based app development, enablement of O2O commerce, and its customer data platform. Affle India successfully completed its IPO in India on 08 Aug 2019 and now trades on the stock exchanges (BSE: 542752 & NSE: AFFLE). Affle Holdings is the Singapore-based promoter for Affle India, and its investors include Microsoft and Bennett Coleman & Company (BCCL), among others. For more details: www.affle.com
About BU: Ultra - Access deals, coupons, and walled-garden-based user acquisition on a single platform to offer bottom-funnel optimization across multiple inventory sources. For more details, please visit: https://www.ultraplatform.io/
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Responsibilities
Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and Big Data technologies for various use cases built on the platform.
Experience in developing streaming pipelines.
Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
Total 6-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills
Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala
Minimum 3 years of experience on Cloud Data Platforms on Azure
Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB
Good to excellent SQL skills
Preferred Technical And Professional Experience
Certification in Azure, and Databricks or Cloudera Spark certified developers
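A minimal PySpark sketch of the kind of ingest-and-transform pipeline described above; the input path, column names, and output location are assumptions for illustration:

```python
# Illustrative PySpark batch transform: read CSV, clean, aggregate, write Parquet.
# The input path, columns, and output location are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example_ingest_transform").getOrCreate()

events = (
    spark.read
    .option("header", True)
    .csv("/data/raw/events/*.csv")             # hypothetical landing path
)

daily_counts = (
    events
    .filter(F.col("event_type").isNotNull())    # drop malformed rows
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .count()
)

(daily_counts.write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("/data/curated/daily_counts"))        # hypothetical curated zone

spark.stop()
```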
Posted 1 day ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include:
Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvement by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize and build creative solutions.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.
Preferred Technical And Professional Experience
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, the platform, and customer-facing systems.
Posted 1 day ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Overview: The person will be responsible for expanding and optimizing our data and data pipeline architecture. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up.
You'll be responsible for:
Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
You'd have:
We are looking for a candidate with 3+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
Experience with data pipeline and workflow management tools: Apache Airflow, NiFi, Talend etc.
Experience with relational SQL and NoSQL databases, including ClickHouse, Postgres and MySQL.
Experience with stream-processing systems: Storm, Spark Streaming, Kafka etc.
Experience with object-oriented/object function scripting languages: Python, Scala, etc.
Experience building and optimizing data pipelines, architectures and data sets.
Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
Strong analytic skills related to working with unstructured datasets.
Build processes supporting data transformation, data structures, metadata, dependency and workload management.
Working knowledge of message queuing, stream processing, and highly scalable data stores.
Why Join us?
Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.
Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
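As a rough sketch of the message-queuing and stream-processing experience listed above, here is a minimal producer/consumer pair using the kafka-python client; the broker address, topic name, and message shape are assumptions, not part of the posting:

```python
# Minimal Kafka produce/consume sketch using the kafka-python client (illustrative).
import json
from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"   # hypothetical broker
TOPIC = "events"            # hypothetical topic

def produce_one() -> None:
    """Publish a single JSON-encoded event."""
    producer = KafkaProducer(
        bootstrap_servers=BROKER,
        value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
    )
    producer.send(TOPIC, {"user_id": 42, "action": "login"})
    producer.flush()

def consume_forever() -> None:
    """Read events from the beginning of the topic and print them."""
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKER,
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    for message in consumer:            # blocks, yielding records as they arrive
        print(message.value)

if __name__ == "__main__":
    produce_one()
```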
Posted 1 day ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About Us
Newfold Digital is a leading web technology company serving nearly 7 million customers globally. Established in 2021 through the combination of leading web services providers Endurance Web Presence and Web.com Group, Newfold's mission is to empower success in a connected world with a focus on helping businesses of all sizes thrive online. The company's world-class family of brands includes BlueHost, HostGator, iPage, Domain.com, A Small Orange, MOJO Marketplace, BigRock, and ResellerClub.
What you'll do?
Participate in 24x7 shifts.
Monitor the stability of our products with various internal tools.
L1 Support ownership of all hosting products (cPanel/Plesk/VPS/Cloud/Dedicated).
Handle incident response, troubleshooting, and fixes for various products/services.
Handle escalations as per policies/procedures.
Get different internal/external groups together to resolve production site issues effectively.
Communicate clearly on tickets and phone calls made to the team about various issues.
Exhibit a sense of urgency to resolve issues.
Build advanced automation workflows for automating repeated issues.
Work with our infrastructure team to deploy and maintain Linux/Windows servers using automated scripts and a predefined runbook.
Ensure SLAs and operational standards are met.
Raise tickets to different internal groups to resolve recurrent problems and alerts, and follow up on escalated issues.
Liaise with engineering teams for RCAs and permanent resolutions on issues affecting production sites.
Contribute to the Operations handbook.
Ensure smooth hand-offs between shifts.
Who you are?
Educational Qualifications: Graduate, preferably in Information Technology or Computer Science, with consistently strong academic performance.
Linux: Good understanding of Linux systems, any shell/Bash, sed/awk/grep/egrep, VI/VIM/Emacs, netstat, lsof, strace, ps/top/atop/dstat, grub boot config & systems rescue, fstab/disk labels, ext3/ext4, IPtables, sysstat (sar/vmstat/iostat etc), run levels & startup scripts, sudo/chroot/chkrootkit/rkhunter.
Windows: Windows 2000/2003/2008, NTFS chkdsk/ACLs etc, troubleshooting system/application faults using Event logs, updates via WSUS, Terminal Services, IIS fundamentals.
Fundamentals: Basic DNS & networking, TCP/UDP, IP routing, HA & load balancing concepts.
Application Protocols: SMTP, HTTP, FTP, IMAP, POP.
Shifts: Must be willing to work in shifts (including at night and on holidays).
Good To Have
Understanding of Cloud Systems/Hardware: RAID, LOM/IPMI/IP KVMs, Dell hardware.
Windows: WMI, PowerShell/VB scripts, MS-SQL fundamentals.
Applications: Postfix/qmail/Exim, database systems fundamentals (MySQL/Postgres), Nginx/Apache (mod_php, mod_fcgid, CGI, php-fpm etc), Tomcat.
Tools/Utilities: Nagios, DHCP, Kickstart/Cobbler, Yum, RPM, GIT/SVN.
Others: Regular expressions, rescue kits like TRK, etc.
Certification: Red Hat Certified Engineer (RHCE), GCP.
Why You'll Love Us
In this era of COVID-19, we believe in putting our employees first and keeping them safe. We were one of the first technology companies to make significant changes to our office environments and team interactions, including mandatory working from home and safety procedures to enter our office space. We are committed to no face-to-face interaction with our employees until the data shows it is entirely safe for our teams. Here is just a snippet of what we think you'll love:
Grow together. Our exciting virtual learning & development programs never cease to amaze us. Participate in our Expert Speak sessions/E-learning courses to grow professionally and personally.
Work with creative & innovative teams. We believe in hiring the best of the best and are proud of being surrounded by people who think out of the box to only better our products, work and customer experiences.
Did someone say free domain? Building a community one domain at a time, one employee at a time. All our employees are eligible for a free domain and WordPress blog as we sponsor the domain registration costs.
Leave your worries aside! Juggling the demands of career and personal life can be stressful and challenging, but don't worry! Our employee assistance program services provide free, confidential, short-term counseling. This benefit is also extended to an immediate family member.
This Job Description includes the essential job functions required to perform the job described above, as well as additional duties and responsibilities. This Job Description is not an exhaustive list of all functions that the employee performing this job may be required to perform. The Company reserves the right to revise the Job Description at any time, and to require the employee to perform functions in addition to those listed above.
Posted 1 day ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Responsibilities
Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and Big Data technologies for various use cases built on the platform.
Experience in developing streaming pipelines.
Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala
Minimum 3 years of experience on Cloud Data Platforms on Azure
Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB
Good to excellent SQL skills
Exposure to streaming solutions and message brokers like Kafka technologies
Preferred Technical And Professional Experience
Certification in Azure, and Databricks or Cloudera Spark certified developers
Posted 1 day ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements
Position Summary: A highly skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, engineering, and architecture of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, scripting, and infrastructure-as-code for automating and optimizing operations is highly desirable. Experience in collaborating with cross-functional teams, including application development, infrastructure, and operations, is highly preferred.
Job Responsibilities:
Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex Big Data clusters.
Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, cluster changes, and compliance.
Identifies and resolves issues utilizing structured tools and techniques.
Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance.
Writes scripts to automate routine cluster management tasks and documents maintenance processing flows per standards.
Implements industry best practices while performing Hadoop cluster administration tasks.
Works in an Agile model with a strong understanding of Agile concepts.
Collaborates with development teams to provide and implement new features.
Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic.
Addresses organizational obstacles to enhance processes and workflows.
Adopts and learns new technologies based on demand and supports team members by coaching and assisting.
Education: Bachelor's degree in Computer Science, Information Systems, or another related field with 14+ years of IT and infrastructure engineering work experience.
Experience: 14+ years total IT experience and 10+ years relevant experience in Big Data database technologies.
Technical Skills:
Big Data Platform Management: Expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, Apache Spark, as well as JanusGraph and IBM BigSQL.
Data Infrastructure & Security: Proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos.
Performance Tuning & Optimization: Skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency.
Backup & Recovery: Experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
Linux & Troubleshooting: Strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams.
DevOps & Scripting: Proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations. Experienced in infrastructure-as-code practices and observability tools such as Elastic.
Agile & Collaboration: Strong understanding of Agile SAFe for Teams, with the ability to work effectively in Agile environments and collaborate with cross-functional teams.
ITSM Process & Tools: Knowledgeable in ITSM processes and tools such as ServiceNow.
Other Critical Requirements:
Automation and Scripting: Proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency.
Analytical and Problem-Solving Skills: Strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment.
24x7 Support: Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
Team Management and Leadership: Proven experience managing geographically distributed and culturally diverse teams, with strong leadership, coaching, and mentoring skills.
Communication Skills: Exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels.
Stakeholder Management: Prior experience in effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams.
Business Presentations: Skilled in creating and delivering impactful business presentations to communicate key insights and recommendations.
Collaboration and Independence: Demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting.
About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
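As a small illustration of the routine cluster-automation scripting mentioned above, here is a Python sketch that wraps the standard `hdfs dfsadmin -report` command and flags dead DataNodes; the exact report wording varies between Hadoop releases, so the parsing below is a best-effort assumption:

```python
# Illustrative health-check script: run `hdfs dfsadmin -report` and flag dead DataNodes.
# Output parsing is a best-effort assumption; report wording varies across Hadoop versions.
import re
import subprocess
import sys

def datanode_counts() -> tuple[int, int]:
    """Return (live, dead) DataNode counts parsed from the dfsadmin report."""
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    live = re.search(r"Live datanodes\s*\((\d+)\)", report)
    dead = re.search(r"Dead datanodes\s*\((\d+)\)", report)
    return (int(live.group(1)) if live else 0,
            int(dead.group(1)) if dead else 0)

if __name__ == "__main__":
    live, dead = datanode_counts()
    print(f"live={live} dead={dead}")
    sys.exit(1 if dead else 0)   # non-zero exit lets a scheduler/monitor raise an alert
```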
Posted 1 day ago
8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At NiCE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what’s the role all about?
NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA; it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It’s widely used in industries like banking, insurance, telecom, healthcare, and customer service.

We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you’ll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
Interface with various R&D groups, Customer Support teams, business partners, and customers globally to address and resolve product issues.
Maintain quality and ongoing internal and external communication throughout your investigation.
Provide a high level of support and minimize R&D escalations.
Prioritize daily missions/cases and manage critical issues and situations.
Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
Be willing to perform on-call duties as required.
Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions.
Good communication skills with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
Minimum of 8 to 12 years of experience in supporting global enterprise customers.
Monitor, troubleshoot, and maintain RPA bots in production environments.
Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
Data analytics - analyze trends, patterns, and anomalies in data to identify product bugs.
Familiarity with ETL processes and data pipelines - advantage.
Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
Familiarity with authentication methods like WinSSO and SAML.
Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
Working and troubleshooting knowledge of Apache software components such as Tomcat, the Apache HTTP Server, and ActiveMQ.
Working and troubleshooting knowledge of SVN/version-control applications.
Knowledge of DB schemas, structure, SQL queries (DML, DDL), and troubleshooting.
Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
Knowledge of terminal server (Citrix) - advantage.
Basic understanding of AWS Cloud systems.
Network troubleshooting skills (working with different tools).
Certification in RPA platforms and working knowledge of RPA application development/support - advantage.
NICE certification - knowledge of RTI/RTS/APA products - advantage.
Integrate NICE's applications with customers' on-prem and cloud-based 3rd-party tools and applications to ingest/transform/store/validate data.
Shift: 24x7 rotational shift (includes night shifts).

Other Required Skills:
Excellent verbal and written communication skills.
Strong troubleshooting and problem-solving skills.
Self-motivated and directed, with keen attention to detail.
Team player - ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7326
Reporting into: Tech Manager
Role Type: Individual Contributor

About NiCE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries.
NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
Show more Show less
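As an illustration of the first-pass log triage described above (before deeper analysis in Splunk or Elastic), a minimal sketch in Python; the log path and error patterns are assumptions, not NICE product specifics.

```python
#!/usr/bin/env python3
"""Summarize recent errors in a Tomcat log - a first-pass support triage step.

The log location and the patterns scanned for are placeholders; a real case
would follow the product's own logging conventions.
"""
from collections import Counter
from pathlib import Path

LOG_FILE = Path("/opt/tomcat/logs/catalina.out")    # hypothetical location
PATTERNS = ("SEVERE", "ERROR", "OutOfMemoryError")  # illustrative patterns


def summarize_errors(log_path: Path) -> Counter:
    """Count how often each pattern of interest appears in the log."""
    counts: Counter = Counter()
    with log_path.open(errors="replace") as fh:
        for line in fh:
            for pattern in PATTERNS:
                if pattern in line:
                    counts[pattern] += 1
    return counts


if __name__ == "__main__":
    for pattern, count in summarize_errors(LOG_FILE).most_common():
        print(f"{pattern}: {count} occurrence(s)")
```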
Posted 1 day ago
7.0 years
0 Lacs
Mulshi, Maharashtra, India
On-site
Company Description
We at Prometteur Solutions Pvt. Ltd. are a team of IT experts who came with a promise of delivering technology-empowered business solutions. We provide world-class software and web development services that focus on playing a supportive role to your business and its holistic growth. Our highly skilled associates and global delivery capabilities ensure the accessibility and scale to align clients' technology solutions with their business needs. Our offerings span the entire IT lifecycle: from consulting through packaged, custom, and cloud applications, as well as a variety of infrastructure services.

Job Description
Experience: 7+ years
Location: Bangalore
Notice period: Immediate joiner
7+ years of software development experience.
Strong Go implementation capabilities.
Understanding of different design principles.
Good understanding of the Linux OS - memory, instruction processing, filesystem, system daemons, etc.
Fluent with the Linux command line and shell scripting.
Working knowledge of servers (nginx, Apache, etc.), proxy servers, and load balancing.
Understanding of service-based architecture and microservices.
Working knowledge of AV codecs, MPEG-TS, and adaptive streaming formats such as DASH and HLS.
Good understanding of computer networking concepts.
Working knowledge of relational databases.
Good analytical and debugging skills.
Knowledge of Git or any other source-code management tool.

Good to Have Skills:
Working knowledge of Core Java and Python is preferred.
Exposure to cloud computing is preferred.
Exposure to API or video-streaming performance testing is preferred.
Preferred experience in Elasticsearch and Kibana (ELK Stack).

Qualifications
Bachelor's degree in Computer Science Engineering or a related field.
Show more Show less
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At NiCE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what’s the role all about?
NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA; it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It’s widely used in industries like banking, insurance, telecom, healthcare, and customer service.

We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you’ll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
Interface with various R&D groups, Customer Support teams, business partners, and customers globally to address and resolve product issues.
Maintain quality and ongoing internal and external communication throughout your investigation.
Provide a high level of support and minimize R&D escalations.
Prioritize daily missions/cases and manage critical issues and situations.
Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
Be willing to perform on-call duties as required.
Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions.
Good communication skills with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
Minimum of 5 to 7 years of experience in supporting global enterprise customers.
Monitor, troubleshoot, and maintain RPA bots in production environments.
Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
Data analytics - analyze trends, patterns, and anomalies in data to identify product bugs.
Familiarity with ETL processes and data pipelines - advantage.
Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
Familiarity with authentication methods like WinSSO and SAML.
Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
Working and troubleshooting knowledge of Apache software components such as Tomcat, the Apache HTTP Server, and ActiveMQ.
Working and troubleshooting knowledge of SVN/version-control applications.
Knowledge of DB schemas, structure, SQL queries (DML, DDL), and troubleshooting.
Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
Knowledge of terminal server (Citrix) - advantage.
Basic understanding of AWS Cloud systems.
Network troubleshooting skills (working with different tools).
Certification in RPA platforms and working knowledge of RPA application development/support - advantage.
NICE certification - knowledge of RTI/RTS/APA products - advantage.
Integrate NICE's applications with customers' on-prem and cloud-based 3rd-party tools and applications to ingest/transform/store/validate data.
Shift: 24x7 rotational shift (includes night shifts).

Other Required Skills:
Excellent verbal and written communication skills.
Strong troubleshooting and problem-solving skills.
Self-motivated and directed, with keen attention to detail.
Team player - ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7556
Reporting into: Tech Manager
Role Type: Individual Contributor

About NiCE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries.
NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
Show more Show less
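The Kubernetes troubleshooting mentioned above usually begins with a quick survey of unhealthy pods. A minimal sketch follows, assuming kubectl is already configured for the target cluster; the namespace is a placeholder.

```python
#!/usr/bin/env python3
"""First-pass triage of unhealthy pods on a Kubernetes cluster.

Assumes `kubectl` is configured for the target cluster; the namespace below is
a placeholder, not a product-specific value.
"""
import json
import subprocess

NAMESPACE = "rpa-prod"  # hypothetical namespace


def unhealthy_pods(namespace: str) -> list[str]:
    """Return human-readable findings for pods that are not Running/Succeeded."""
    raw = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    findings = []
    for pod in json.loads(raw)["items"]:
        name = pod["metadata"]["name"]
        phase = pod["status"].get("phase", "Unknown")
        if phase not in ("Running", "Succeeded"):
            findings.append(f"{name}: phase={phase}")
        for cs in pod["status"].get("containerStatuses", []):
            waiting = cs.get("state", {}).get("waiting")
            if waiting:  # e.g. CrashLoopBackOff, ImagePullBackOff
                findings.append(f"{name}: {waiting.get('reason')}")
    return findings


if __name__ == "__main__":
    for finding in unhealthy_pods(NAMESPACE):
        print(finding)
```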
Posted 1 day ago
7.0 years
0 Lacs
Mohali district, India
On-site
Job Summary: We are seeking a skilled Database & ETL Developer with strong expertise in SQL, data modeling, cloud integration, and reporting tools. The ideal candidate will be responsible for designing scalable database architectures, optimizing complex queries, working with modern ETL tools, and delivering insightful dashboards and reports. This role requires collaboration across multiple teams and the ability to manage tasks in a dynamic, agile environment.

Experience: 7+ Years
Location: Mohali (Work from Office Only)

Key Responsibilities:
Design and implement normalized and scalable database schemas using ER modeling techniques.
Develop, maintain, and optimize stored procedures, triggers, views, and advanced SQL queries (e.g., joins, subqueries, indexing).
Execute database backup, restore, and recovery operations, ensuring data integrity and high availability.
Optimize SQL performance through indexing strategies, execution plans, and query refactoring.
Lead and support cloud migration and integration projects involving platforms such as AWS and Azure.
Implement and manage data lake architectures such as AWS HealthLake and AWS Glue pipelines.
Create and manage interactive dashboards and business reports using Power BI, Amazon QuickSight, or Tableau.
Collaborate with cross-functional teams and use tools like JIRA, Azure Boards, ClickUp, or Trello for task and project tracking.

Required Skills:
Strong experience with SQL Server or PostgreSQL, including advanced T-SQL programming.
In-depth knowledge of ER modeling and relational database design principles.
Proficiency in query optimization, indexing, joins, and subqueries.
Hands-on experience with modern ETL and data integration platforms such as Airbyte, Apache Airflow, Azure Data Factory (ADF), and AWS Glue.
Understanding of Data Lake / HealthLake architectures and their role in cloud data ecosystems.
Proficiency with reporting tools like Power BI, Amazon QuickSight, or Tableau.
Experience with database backup, restore, and high-availability strategies.
Familiarity with project/task tracking tools such as JIRA, Azure Boards, ClickUp, or Trello.

Soft Skills:
Strong verbal and written communication skills.
Excellent problem-solving and troubleshooting abilities.
Self-motivated with the ability to manage priorities and work independently across multiple projects.

Nice to Have:
Certification in cloud platforms (AWS, Azure).
Exposure to healthcare data standards and compliance (e.g., HIPAA, FHIR).

Company overview:
smartData is a leader in the global software business space when it comes to business consulting and technology integrations, making business easier, accessible, secure, and meaningful for its target segment of startups to small and medium enterprises. As your technology partner, we provide both domain and technology consulting; our in-house products and our unique productized service approach help us act as business integrators, saving substantial time to market for our esteemed customers. With 8000+ projects, vast experience of 20+ years, backed by offices in the US, Australia, and India providing next-door assistance and round-the-clock connectivity, we ensure continual business growth for all our customers. Our business consulting and integrator services via software solutions focus on important industries of healthcare, B2B, B2C, and B2B2C platforms, online delivery services, video platform services, and IT services.
Strong expertise in Microsoft, LAMP, and MEAN/MERN stacks with a mobility-first approach via native (iOS, Android, Tizen) or hybrid (React Native, Flutter, Ionic, Cordova, PhoneGap) mobility stacks, mixed with AI & ML, helps us deliver on the ongoing needs of customers continuously.
For more information, visit http://www.smartdatainc.com
Show more Show less
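To illustrate the kind of orchestration the posting mentions (Airflow alongside ADF and Glue), here is a minimal Airflow DAG sketch; the DAG id, schedule, and loading logic are placeholders rather than anything prescribed by the role.

```python
"""Minimal Airflow DAG sketch for a nightly ETL run of the kind described above.

The schedule, task names, and the body of the load step are placeholders; a
real pipeline would add retries, alerting, and data-quality checks.
"""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_daily_extract(**context):
    # Placeholder for the actual extract/transform/load logic
    # (e.g. a stored-procedure call, an ADF trigger, or a Glue job start).
    print(f"Loading data for {context['ds']}")


with DAG(
    dag_id="nightly_reporting_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # hypothetical 02:00 daily run
    catchup=False,
) as dag:
    load = PythonOperator(
        task_id="load_daily_extract",
        python_callable=load_daily_extract,
    )
```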
Posted 1 day ago
0 years
0 Lacs
Raipur, Chhattisgarh, India
On-site
Role Summary
We are seeking a highly motivated and skilled Data Engineer to join our data and analytics team. This role is ideal for someone with strong experience in building scalable data pipelines, working with modern lakehouse architectures, and deploying data solutions on Microsoft Azure. You’ll be instrumental in developing, orchestrating, and maintaining our real-time and batch data infrastructure using tools like Apache Spark, Apache Kafka, Apache Airflow, Azure Data Services, and modern DevOps practices.

Key Responsibilities
Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark.
Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses.
Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam.
Build and schedule jobs using orchestration tools like Apache Airflow or Dagster.
Perform data modeling using the Kimball methodology for building dimensional models in Snowflake or other data warehouses.
Implement data versioning and transformation using dbt and Apache Iceberg or Delta Lake.
Manage data cataloging and lineage using tools like Marquez or Collibra.
Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes.
Set up and maintain monitoring and alerting systems using Prometheus and Grafana for performance and reliability.

Required Skills & Qualifications
Programming & Scripting: Proficiency in Python, with strong knowledge of OOP and data structures & algorithms. Comfortable working in Linux environments for development and deployment.
Database Technologies: Strong command of SQL and understanding of relational (DBMS) and NoSQL databases.
Big Data & Real-Time Processing: Solid experience with Apache Spark (PySpark/Scala). Familiarity with real-time processing tools like Kafka, Flink, or Beam.
Orchestration & Scheduling: Hands-on experience with Airflow, Dagster, or similar orchestration tools.
Cloud Platform: Deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, Azure Functions, etc. AZ-900 or other Azure certifications are a plus.
Lakehouse & Warehousing: Knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake. Understanding of modern lakehouse architecture and related best practices.
Data Cataloging & Governance: Familiarity with Marquez, Collibra, or other cataloging tools.
DevOps & CI/CD: Experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools.
Monitoring & Logging: Proficiency in setting up dashboards and alerts with Prometheus and Grafana.

Note: Immediate joiners will be preferred.
Show more Show less
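For illustration, a compact PySpark batch transformation of the sort this role builds; the lake paths, column names, and the Delta write are assumptions (writing Delta requires the delta-spark package to be available on the cluster).

```python
"""Sketch of a PySpark batch transformation feeding a curated lakehouse zone.

All paths, columns, and the aggregation itself are placeholders chosen only to
illustrate the shape of such a job.
"""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_batch").getOrCreate()

# Read raw JSON events landed in the data lake (placeholder path).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/2024-06-01/")

# Basic cleansing and a daily aggregate - illustrative transformation only.
daily = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date", "channel")
       .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Write a partitioned Delta table to the curated zone (placeholder path).
daily.write.format("delta").mode("overwrite").partitionBy("order_date").save(
    "abfss://curated@examplelake.dfs.core.windows.net/orders_daily/"
)
```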
Posted 1 day ago
8.0 - 10.0 years
0 Lacs
Delhi, India
On-site
Location: Bengaluru / Delhi
Reports To: Chief Revenue Officer

Position Overview: We are looking for a highly motivated Pre-Sales Specialist to join our team at Neysa, a rapidly growing AI Cloud Platform company that's making waves in the industry. The role is a customer-facing technical position that will work closely with sales teams to understand client requirements, design tailored solutions and drive technical engagements. You will be responsible for presenting complex technology solutions to customers, creating compelling demonstrations, and assisting in the successful conversion of sales opportunities.

Key Responsibilities:
Solution Design & Customization: Work closely with customers to understand their business challenges and technical requirements. Design and propose customized solutions leveraging Cloud, Network, AI, and Machine Learning technologies that best fit their needs.
Sales Support & Enablement: Collaborate with the sales team to provide technical support during the sales process, including delivering presentations, conducting technical demonstrations, and assisting in the development of proposals and RFP responses.
Customer Engagement: Engage with prospects and customers throughout the sales cycle, providing technical expertise and acting as the technical liaison between the customer and the company. Conduct deep-dive discussions and workshops to uncover technical requirements and offer viable solutions.
Proof of Concept (PoC): Lead the technical aspects of PoC engagements, demonstrating the capabilities and benefits of the proposed solutions. Collaborate with the customer to validate the solution, ensuring it aligns with their expectations.
Product Demos & Presentations: Deliver compelling product demos and presentations tailored to the customer’s business and technical needs, helping organizations unlock innovation and growth through AI. Simplify complex technical concepts to ensure that both business and technical stakeholders understand the value proposition.
Proposal Development & RFPs: Assist in crafting technical proposals, responding to RFPs (Request for Proposals), and providing technical content that highlights the company’s offerings, differentiators, and technical value.
Technical Workshops & Training: Facilitate customer workshops and training sessions to enable customers to understand the architecture, functionality, and capabilities of the solutions offered.
Collaboration with Product & Engineering Teams: Provide feedback to product management and engineering teams based on customer interactions and market demands. Help shape future product offerings and improvements.
Market & Competitive Analysis: Stay up to date on industry trends, new technologies, and competitor offerings in AI and Machine Learning, Cloud, and Networking, to provide strategic insights to sales and product teams.
Documentation & Reporting: Create and maintain technical documentation, including solution designs, architecture diagrams, and deployment plans. Track and report on pre-sales activities, including customer interactions, pipeline status, and PoC results.

Key Skills and Qualifications:
Experience: Minimum of 8-10 years of experience in a pre-sales or technical sales role, with a focus on AI, Cloud, and Networking solutions.
Technical Expertise: Solid understanding of Cloud computing, Data Center infrastructure, Networking (SDN, SD-WAN, VPNs), and emerging AI/ML technologies. Experience with architecture design and solutioning across these domains, especially in hybrid cloud and multi-cloud environments. Familiarity with tools such as Kubernetes, Docker, TensorFlow, Apache Hadoop, and machine learning frameworks.
Sales Collaboration: Ability to work alongside sales teams, providing the technical expertise needed to close complex deals. Experience in delivering customer-focused presentations and demos.
Presentation & Communication Skills: Exceptional ability to articulate technical solutions to both technical and non-technical stakeholders. Strong verbal and written communication skills.
Customer-Focused Mindset: Excellent customer service skills with a consultative approach to solving customer problems. Ability to understand business challenges and align technical solutions accordingly. Having the mindset to build rapport with customers and become their trusted advisor.
Problem-Solving & Creativity: Strong analytical and problem-solving skills, with the ability to design creative, practical solutions that align with customer needs.
Certifications: Degree in Computer Science, Engineering, or a related field. Cloud and AI/ML certifications are highly desirable.
Team Player: Ability to work collaboratively with cross-functional teams including product, engineering, and delivery teams.

Preferred Qualifications:
Industry Experience: Experience in delivering solutions in industries such as finance, healthcare, or telecommunications is a plus.
Technical Expertise in AI/ML: A deeper understanding of AI/ML applications, including natural language processing (NLP), computer vision, predictive analytics, or data science use cases.
Experience with DevOps Tools: Familiarity with CI/CD pipelines, infrastructure as code (IaC), and automation tools like Terraform, Ansible, or Jenkins.
Show more Show less
Posted 1 day ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
We are seeking an experienced Data Architect with expertise in Snowflake, dbt, Apache Airflow, and AWS to design, implement, and optimize scalable data solutions. The ideal candidate will play a critical role in defining data architecture, governance, and best practices while collaborating with cross-functional teams to drive data-driven decision-making.

Key Responsibilities
Data Architecture & Strategy: Design and implement scalable, high-performance cloud-based data architectures on AWS. Define data modeling standards for structured and semi-structured data in Snowflake. Establish data governance, security, and compliance best practices.
Data Warehousing & ETL/ELT Pipelines: Develop, maintain, and optimize Snowflake-based data warehouses. Implement dbt (Data Build Tool) for data transformation and modeling. Design and schedule data pipelines using Apache Airflow for orchestration.
Cloud & Infrastructure Management: Architect and optimize data pipelines using AWS services like S3, Glue, Lambda, and Redshift. Ensure cost-effective, highly available, and scalable cloud data solutions.
Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to align data solutions with business goals. Provide technical guidance and mentoring to the data engineering team.
Performance Optimization & Monitoring: Optimize query performance and data processing within Snowflake. Implement logging, monitoring, and alerting for pipeline reliability.

Required Skills & Qualifications
10+ years of experience in data architecture, engineering, or related roles.
Strong expertise in Snowflake, including data modeling, performance tuning, and security best practices.
Hands-on experience with dbt for data transformations and modeling.
Proficiency in Apache Airflow for workflow orchestration.
Strong knowledge of AWS services (S3, Glue, Lambda, Redshift, IAM, EC2, etc.).
Experience with SQL, Python, or Spark for data processing.
Familiarity with CI/CD pipelines and Infrastructure-as-Code (Terraform/CloudFormation) is a plus.
Strong understanding of data governance, security, and compliance (GDPR, HIPAA, etc.).

Preferred Qualifications
Certifications: AWS Certified Data Analytics – Specialty, Snowflake SnowPro Certification, or dbt Certification.
Experience with streaming technologies (Kafka, Kinesis) is a plus.
Knowledge of modern data stack tools (Looker, Power BI, etc.).
Experience in OTT streaming would be an added advantage.
Show more Show less
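A small example of the Snowflake-side monitoring such an architect might run while tuning cost and performance; the account, credentials, and warehouse names are placeholders, and querying the ACCOUNT_USAGE share requires appropriately granted privileges.

```python
"""Sketch of a Snowflake warehouse-usage check using snowflake-connector-python.

Account, user, password, and warehouse are placeholders; a real deployment
would pull credentials from a secrets manager, never from source code.
"""
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder
    user="ARCHITECT_SVC",        # placeholder
    password="********",         # placeholder - use a secrets manager
    warehouse="ANALYTICS_WH",    # placeholder
)

try:
    cur = conn.cursor()
    # Credits consumed per warehouse over the last 7 days (ACCOUNT_USAGE view).
    cur.execute(
        """
        SELECT warehouse_name, SUM(credits_used) AS credits
        FROM snowflake.account_usage.warehouse_metering_history
        WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
        GROUP BY warehouse_name
        ORDER BY credits DESC
        """
    )
    for name, credits in cur.fetchall():
        print(f"{name}: {credits:.2f} credits")
finally:
    conn.close()
```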
Posted 1 day ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: IT Administrator with Networking & Server Administration Location : Hyderabad Experience : 6 months – 2 years Job Type: Paid Internship About Us: Instaresz Business Services Pvt Ltd is a forward-thinking, fast-growing technology company that thrives on innovative solutions. We are currently looking for an experienced IT Administrator who will take responsibility for managing and maintaining the network infrastructure, servers, and systems while ensuring smooth day-to-day IT operations across the organization. Key Responsibilities: Set up, configure, and maintain LAN/WAN networks, routers, switches, firewalls, and VPNs. Administer Windows/Linux servers, Active Directory, DNS, DHCP, and user access controls. Manage software and OS package installations using tools like apt, yum, dnf, and rpm. Monitor and troubleshoot network and system performance issues. Maintain web, file, mail, and database servers (Apache, Nginx, Postfix, MySQL, etc.). Implement and monitor IT security measures including firewalls, antivirus, and access policies. Perform system backups, restore processes, and support disaster recovery plans. Support virtualization platforms (VMware, Hyper-V) and assist with basic cloud infrastructure (AWS, Azure). Automate tasks using PowerShell or Bash scripting. Document IT procedures, configurations, and network diagrams. Required Skills & Qualifications: Proven Experience in IT system administration, networking, and server management. Hands-on Knowledge of networking protocols, IP addressing, subnetting, and VPNs. Experience with network devices such as routers, switches, and firewalls. Proficient in Windows Server (Active Directory, Group Policies, DNS, DHCP) and Linux administration (Ubuntu, CentOS, RHEL). In-depth knowledge of server administration , including web servers (Apache, Nginx), databases (MySQL, PostgreSQL), and mail servers (Postfix, Exchange). Experience with package management tools (apt, yum, dnf, rpm). Familiarity with cloud platforms (AWS, Azure) and virtualization tools (VMware, Hyper-V). Strong understanding of IT security practices , including firewalls, antivirus, VPNs, and access management. Scripting skills for automation (PowerShell, Bash). Excellent problem-solving and troubleshooting abilities. Preferred Certifications: CompTIA Network+ CompTIA Security+ Microsoft Certified: Windows Server / Azure Administrator Cisco Certified Network Associate (CCNA) Red Hat Certified System Administrator (RHCSA) ITIL Foundation (For IT Service Management) Additional Skills (Good to Have): Experience with containerization technologies (Docker, Kubernetes). Knowledge of Version Control Systems (Git). Why Join Us: Competitive salary and performance-based incentives Dynamic and collaborative work environment Opportunities for learning and growth Exposure to cutting-edge technologies and industry trends Show more Show less
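The automation duties listed above often start with small scheduled checks like the sketch below; the service list and disk threshold are placeholders, and a real version would report into monitoring or ticketing rather than stdout.

```python
#!/usr/bin/env python3
"""Small daily server check of the kind an IT administrator might schedule.

Service names and the disk threshold are illustrative placeholders.
"""
import shutil
import subprocess

SERVICES = ["nginx", "mysql", "postfix"]  # illustrative services
DISK_ALERT_PCT = 85                       # hypothetical threshold


def service_active(name: str) -> bool:
    """Return True if systemd reports the unit as active."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", name])
    return result.returncode == 0


if __name__ == "__main__":
    for svc in SERVICES:
        state = "active" if service_active(svc) else "NOT RUNNING"
        print(f"{svc}: {state}")

    usage = shutil.disk_usage("/")
    used_pct = usage.used / usage.total * 100
    print(f"Root filesystem: {used_pct:.1f}% used")
    if used_pct > DISK_ALERT_PCT:
        print("WARNING: disk usage above threshold")
```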
Posted 1 day ago
7.0 years
40 Lacs
India
Remote
Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform
As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or any other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Show more Show less
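One routine pipeline-operations task implied above, triggering and watching an AWS Glue job, might look like the sketch below; the job name, region, and blocking poll loop are illustrative only, and production runs would normally be orchestrated by Airflow or Step Functions instead.

```python
"""Sketch of starting and monitoring an AWS Glue job with boto3.

The job name and region are placeholders; credentials are resolved by the
standard boto3 credential chain (environment, profile, or instance role).
"""
import time

import boto3

glue = boto3.client("glue", region_name="ap-southeast-1")  # placeholder region
JOB_NAME = "curate_transactions_daily"                      # hypothetical job

run_id = glue.start_job_run(JobName=JOB_NAME)["JobRunId"]
print(f"Started Glue job run {run_id}")

while True:
    state = glue.get_job_run(JobName=JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
    print(f"State: {state}")
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)  # simple poll; real orchestration would use sensors/callbacks
```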
Posted 1 day ago
7.0 years
0 Lacs
India
Remote
About Lemongrass
Lemongrass is a software-enabled services provider, synonymous with SAP on Cloud, focused on delivering superior, highly automated Managed Services to Enterprise customers. Our customers span multiple verticals and geographies across the Americas, EMEA and APAC. We partner with AWS, SAP, Microsoft and other global technology leaders.

We are seeking an experienced Cloud Data Engineer with a strong background in AWS, Azure, and GCP. The ideal candidate will have extensive experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, and GCP Dataflow, as well as other ETL tools like Informatica, SAP Data Intelligence, etc. You will be responsible for designing, implementing, and maintaining robust data pipelines and building scalable data lakes. Experience with various data platforms like Redshift, Snowflake, Databricks, Synapse, and others is essential. Familiarity with data extraction from SAP or ERP systems is a plus.

Key Responsibilities:
Design and Development: Design, develop, and maintain scalable ETL pipelines using cloud-native tools (AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.). Architect and implement data lakes and data warehouses on cloud platforms (AWS, Azure, GCP). Develop and optimize data ingestion, transformation, and loading processes using Databricks, Snowflake, Redshift, BigQuery and Azure Synapse. Implement ETL processes using tools like Informatica, SAP Data Intelligence, and others. Develop and optimize data processing jobs using Spark Scala.
Data Integration and Management: Integrate various data sources, including relational databases, APIs, unstructured data, and ERP systems, into the data lake. Ensure data quality and integrity through rigorous testing and validation. Perform data extraction from SAP or ERP systems when necessary.
Performance Optimization: Monitor and optimize the performance of data pipelines and ETL processes. Implement best practices for data management, including data governance, security, and compliance.
Collaboration and Communication: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Collaborate with cross-functional teams to design and implement data solutions that meet business needs.
Documentation and Maintenance: Document technical solutions, processes, and workflows. Maintain and troubleshoot existing ETL pipelines and data integrations.

Qualifications
Education: Bachelor's degree in Computer Science, Information Technology, or a related field. Advanced degrees are a plus.
Experience: 7+ years of experience as a Data Engineer or in a similar role. Proven experience with cloud platforms: AWS, Azure, and GCP. Hands-on experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc. Experience with other ETL tools like Informatica, SAP Data Intelligence, etc. Experience in building and managing data lakes and data warehouses. Proficiency with data platforms like Redshift, Snowflake, BigQuery, Databricks, and Azure Synapse. Experience with data extraction from SAP or ERP systems is a plus. Strong experience with Spark and Scala for data processing.
Skills: Strong programming skills in Python, Java, or Scala. Proficient in SQL and query optimization techniques. Familiarity with data modeling, ETL/ELT processes, and data warehousing concepts. Knowledge of data governance, security, and compliance best practices. Excellent problem-solving and analytical skills. Strong communication and collaboration skills.

Preferred Qualifications:
Experience with other data tools and technologies such as Apache Spark or Hadoop.
Certifications in cloud platforms (AWS Certified Data Analytics – Specialty, Google Professional Data Engineer, Microsoft Certified: Azure Data Engineer Associate).
Experience with CI/CD pipelines and DevOps practices for data engineering.

Selected applicant will be subject to a background investigation, which will be conducted and the results of which will be used in compliance with applicable law.

What we offer in return:
Remote Working: Lemongrass always has offered and always will offer 100% remote work.
Flexibility: Work where and when you like most of the time.
Training: A subscription to A Cloud Guru and a generous budget for taking certifications and other resources you'll find helpful.
State-of-the-art tech: An opportunity to learn and run the latest industry-standard tools.
Team: Colleagues who will challenge you, giving you the chance to learn from them and them from you.

Lemongrass Consulting is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate on the basis of race, religion, color, national origin, religious creed, gender, sexual orientation, gender identity, gender expression, age, genetic information, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
Show more Show less
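As a sketch of the ERP/database extraction work described above, here is a generic JDBC pull into a lake staging zone with PySpark; the connection details, table, and target path are placeholders, and SAP sources in practice often go through dedicated connectors or SAP Data Intelligence instead.

```python
"""Sketch of a JDBC-based extraction landing an ERP table as Parquet in the lake.

The JDBC URL, table, credentials, and target path are placeholders; the JDBC
driver for the source database must be present on the Spark classpath.
"""
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("erp_customer_extract").getOrCreate()

customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://erp-host:1433;databaseName=ERP")  # placeholder
    .option("dbtable", "dbo.CUSTOMERS")                                # placeholder
    .option("user", "extract_svc")                                     # placeholder
    .option("password", "********")                                    # use a secrets manager
    .option("fetchsize", "10000")                                      # tune for the source
    .load()
)

# Land the raw extract in the staging zone of the data lake (placeholder path).
customers.write.mode("overwrite").parquet("s3://example-lake/staging/erp/customers/")
```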
Posted 1 day ago
The Apache Software Foundation maintains a widely used range of open-source software projects. In India, the demand for professionals with expertise in Apache tools and technologies is on the rise. Job seekers looking to pursue a career in Apache-related roles have a plethora of opportunities in various industries. Let's delve into the Apache job market in India to gain a better understanding of the landscape.
The main hiring hubs are cities known for their thriving IT sectors, which see high demand for Apache professionals across different organizations.
The salary range for Apache professionals in India varies based on experience and skill level:
- Entry-level: INR 3-5 lakhs per annum
- Mid-level: INR 6-10 lakhs per annum
- Experienced: INR 12-20 lakhs per annum
In the Apache job market in India, a typical career path may progress as follows:
1. Junior Developer
2. Developer
3. Senior Developer
4. Tech Lead
5. Architect
Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:
- Linux
- Networking
- Database Management
- Cloud Computing
As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!