3.0 - 5.0 years
6 - 13 Lacs
Gurgaon
On-site
Role: Data Engineer
Experience: 3–5 years
Location: Gurgaon (on-site)
Notice Period: Immediate

Key Skills Required
- Python
- Apache Spark
- Databricks
- Machine Learning (basic to intermediate understanding)
- ETL/data pipelines
- SQL (nice to have)

Role Overview
We're looking for a Data Engineer with 3–5 years of experience to work on building and optimizing data pipelines using Python and Spark, with hands-on experience in Databricks. The ideal candidate should also have exposure to implementing machine learning models and collaborating across teams to deliver scalable data solutions.

Responsibilities
- Build and maintain efficient, scalable data pipelines using Python and Apache Spark.
- Work closely with analytics and engineering teams to develop data-driven solutions.
- Use Databricks for processing, analyzing, and visualizing large datasets.
- Apply machine learning techniques for data insights and automation.
- Improve the performance, reliability, and quality of data infrastructure.
- Monitor data integrity across the entire data lifecycle.

Required Qualifications
- Strong hands-on experience with Python and Apache Spark.
- Proficiency with Databricks for engineering workflows.
- Good understanding of machine learning concepts and the ability to implement models.
- Familiarity with ETL processes, data warehousing, and SQL.
- Strong communication and problem-solving skills.

Educational Background
- BE/BTech/BIT/MCA/BCA or a related technical degree.

Job Type: Full-time
Pay: ₹600,000.00 - ₹1,300,000.00 per year
Schedule: Morning shift
Work Location: In person
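For illustration, a minimal sketch of the kind of PySpark batch pipeline this posting describes. The paths, column names, and schema below are hypothetical, not taken from the role.

```python
# Illustrative only: a minimal extract-transform-load job in PySpark.
# Paths, table names, and schema are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw CSV files landed in cloud storage
raw = spark.read.option("header", True).csv("/mnt/raw/orders/2024-06-01/")

# Transform: basic cleansing and a daily revenue aggregate
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)
daily = clean.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

# Load: write a partitioned table for downstream analytics
daily.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/daily_revenue/")
```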
Posted 4 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
We’re seeking a talented and passionate Trainer to join our dynamic team and make a remarkable impact on the future of technology. The ideal candidate should have a strong base in technological concepts and a keen interest in delivery and mentoring. The role involves delivering best-in-class training sessions, supporting curriculum development, and providing hands-on guidance to learners.

Responsibilities - What You’ll Do

Training Coordination, Support & Delivery
- Assist in scheduling and coordinating training sessions
- Deliver classroom-based and virtual instructor-led training (ILT) sessions on various organizational products, platforms, and technologies
- Conduct hands-on training, workshops, and exercises to reinforce learning
- Manage training attendance records and assessments

Learner Engagement
- Help ensure learners have access to relevant resources
- Address learner queries and create a positive learning environment
- Ensure a smooth learning experience throughout the learning cycle
- Track learner progress through specific assessments and exercises
- Prepare learners for industry-standard certifications

Curriculum Development
- Create structured learning paths for various experience levels
- Develop course materials, decks, and guides for training
- Update training content, available in various formats, based on industry trends and technological advancements, as applicable
- Prepare learners with practical applications of product-offering concepts

Key Skills & Experience - What We’re Looking For

Technical Skills
Knowledge of any of the following technologies and industry advancements:
- Familiarity with the GenAI landscape, Machine Learning (ML), or a related area
- Proficiency in Data Engineering, Apache NiFi, FlowFiles, data integration and flow management, ETL, and data warehousing concepts
- Knowledge of Python, SQL, and other relevant programming languages
- Strong expertise in LCNC development (UI/UX principles, Java, JavaScript frameworks)
- Experience with APIs and microservices
- Fundamental understanding of web application development

Training & Mentoring Skills
- Prior experience in conducting product-based or technology-based training sessions
- Ability to simplify complex technical concepts for easy understanding
- Must have delivery experience, both virtual and in-class
- Excellent articulation, collaboration, and mentoring skills

Content Creation
- Experience in content creation and editing of training videos

Qualifications & Experience
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field
- 5+ years of experience in cloud-based technologies or Artificial Intelligence (AI)
- Experience in training or coaching in a corporate or academic environment preferred
- Must have MS PowerPoint knowledge and Camtasia or other video-editing skills
Posted 4 hours ago
3.0 years
0 Lacs
Gurgaon
On-site
Job Description

Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day.

It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About the role
We are looking for a Data Engineer with a collaborative, “can-do” attitude who is committed and strives with determination and motivation to make their team successful, and who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. It will help drive Circle K’s next phase in the digital journey by transforming data to achieve actionable business outcomes.

Roles and Responsibilities
- Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
- Demonstrate technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options
- Determine solutions that are best suited to develop a pipeline for a particular data source
- Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
- Efficient in ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance)
- Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
- Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
- Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
- Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions
- Build cross-platform data strategy to aggregate multiple sources and process development datasets
- Be proactive in stakeholder communication; mentor and guide junior resources through regular KT/reverse KT, help them identify production bugs/issues when needed, and provide resolution recommendations

Job Requirements
- Bachelor’s degree in Computer Engineering, Computer Science, or a related discipline; Master’s degree preferred
- 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment
- 3+ years of experience setting up and operating data pipelines using Python or SQL
- 3+ years of advanced SQL programming: PL/SQL, T-SQL
- 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
- Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
- 3+ years of strong and extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and big data
- 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions
- 3+ years of experience defining and enabling data quality standards for auditing and monitoring
- Strong analytical abilities and strong intellectual curiosity
- In-depth knowledge of relational database design, data warehousing, and dimensional data modeling concepts
- Understanding of REST and good API design
- Experience working with Apache Iceberg, Delta tables, and distributed computing frameworks
- Strong collaboration and teamwork skills; excellent written and verbal communication skills
- Self-starter, motivated, and able to work in a fast-paced development environment
- Agile experience highly desirable
- Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools

Preferred Skills
- Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
- Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques
- Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks
- Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control tools, Master Data Management (MDM), and data quality tools
- Strong experience in ETL/ELT development, QA, and operations/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
- Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting
- ADF, Databricks, and Azure certifications are a plus

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake

#LI-DS1
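For illustration, a minimal sketch of loading staged data into Snowflake with the Python connector, the kind of ELT step this posting describes. The account, credentials, stage, and table names are placeholders, and it assumes an external stage over Azure Blob Storage already exists.

```python
# Illustrative only: COPY a staged file into a Snowflake table via the Python connector.
# All identifiers and credentials below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.central-india.azure",
    user="ETL_USER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Assumes an external stage (e.g. over Azure Blob Storage) named AZURE_BLOB_STAGE
    cur.execute(
        "COPY INTO STAGING.ORDERS FROM @AZURE_BLOB_STAGE/orders/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute("SELECT COUNT(*) FROM STAGING.ORDERS")
    print("Rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```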
Posted 4 hours ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
🚀 We’re Hiring: Full Stack Developer (Java & Python)
📍 Location: Chennai | 🧠 Experience: 5–7 Years

We're looking for a Full Stack Developer skilled in Java and Python to join our dynamic team in Chennai. You'll play a key role in building scalable, high-performance web applications, working across both backend and frontend technologies.

🔧 Responsibilities:
- Develop robust web apps using Java (Spring Boot, JPA, Hibernate) and Python (Flask)
- Design RESTful APIs and manage MySQL databases
- Deploy apps on Apache Tomcat; implement CI/CD using Docker and Jenkins
- Build responsive UIs with JSP, HTML, JavaScript, jQuery, Bootstrap
- Collaborate with cross-functional teams and follow secure coding practices

✅ Desired Skills:
- Minimum 5 years in full-stack development
- Strong hands-on experience with Java and Python frameworks
- Experience with REST APIs, SQL, Docker, Jenkins
- Solid understanding of OOP, design patterns, and Agile methodology
- Strong troubleshooting, multitasking, and communication skills

🎓 Education: Bachelor’s/Master’s in CS, IT, or a related field

Ready to code, collaborate, and create impact? Apply now!
Posted 4 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
TransUnion's Job Applicant Privacy Notice

What We'll Bring
.NET Full Stack Engineer

What You'll Bring

Key Responsibilities
- Develop and maintain front-end and back-end components of our fraud detection platform
- Implement real-time data processing and streaming functionalities
- Design and develop APIs for integrating various microservices
- Collaborate with cross-functional teams to deliver high-quality software solutions
- Participate in the entire application lifecycle, focusing on coding, debugging, and testing
- Ensure the implementation of security protocols and data protection measures
- Stay up to date with emerging trends and technologies in AI/ML, fraud detection, and full stack development

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Minimum of 5 years of experience as a .NET Full Stack Developer
- Strong proficiency in programming languages such as .NET, ASP, C, and C#
- Experience with data streaming and processing tools (e.g., Apache Kafka, Spark)
- Solid experience with RDBMS and NoSQL database concepts
- Experience developing RESTful or GraphQL APIs
- Familiarity with cloud platforms (GCP, AWS, Azure) and containerization tools (Docker, Kubernetes)
- Strong analytical and problem-solving skills

Impact You'll Make
NA

This is a hybrid position and involves regular performance of job responsibilities virtually as well as in person at an assigned TU office location for a minimum of two days a week.

TransUnion Job Title
Developer, Software Development
Posted 4 hours ago
0 years
0 Lacs
Gurgaon
On-site
Job Purpose
As a key member of the support team, the Application Support Engineer is responsible for ensuring the stability and availability of critical applications. This role involves monitoring, troubleshooting, and resolving application issues, adhering to defined SLAs and processes.

Desired Skills and Experience
- Experience in an application support or technical support role with strong troubleshooting, problem-solving, and analytical skills
- Ability to work independently and effectively and to thrive in a fast-paced, high-pressure environment
- Experience in either C# or Java preferred, to support effective troubleshooting and understanding of application code
- Knowledge of various operating systems (Windows, Linux, macOS) and familiarity with software applications and tools used in the industry
- Proficiency in programming languages such as Python, and scripting languages like Bash or PowerShell
- Experience with database systems such as MySQL, Oracle, and SQL Server, and the ability to write and optimize SQL queries
- Understanding of network protocols, configurations, and troubleshooting of network-related issues
- Skills in managing and configuring servers, including web servers (Apache, Nginx) and application servers (desirable)
- Familiarity with ITIL incident management processes
- Familiarity with monitoring and logging tools like Nagios, Splunk, or the ELK stack to track application performance and issues
- Knowledge of version control systems like Git to manage code changes and collaborate with development teams (desirable)
- Experience with cloud platforms such as AWS, Azure, or Google Cloud for deploying and managing applications (desirable)
- Experience in Fixed Income markets or financial applications support is preferred
- Strong attention to detail and ability to follow processes
- Ability to adapt to changing priorities and client needs, with good verbal and written communication skills

Key Responsibilities
- Provide L1/L2 technical support for applications
- Monitor application performance and system health, proactively identifying potential issues
- Investigate, diagnose, and resolve application incidents and service requests within agreed SLAs
- Escalate complex or unresolved issues to the Service Manager or relevant senior teams
- Document all support activities, including incident details, troubleshooting steps, and resolutions
- Participate in shift handovers and knowledge sharing
- Perform routine maintenance tasks to ensure optimal application performance
- Collaborate with other support teams to ensure seamless issue resolution
- Develop and maintain technical documentation and knowledge base articles
- Assist in the implementation of new applications and updates
- Provide training and support to junior team members
Posted 4 hours ago
3.0 years
0 - 0 Lacs
Mohali
On-site
Job Description

Job Title: Full Stack Developer (React JS & Node.js)
Location: Mohali, Punjab
Experience Required: 3 years
Job Type: Full-Time
Department: Software Development
Preference: Local candidates
Company: LGS (Lakhera Global Services)

About Us:
LGS (Lakhera Global Services) is an innovative technology company based in Mohali, specializing in delivering high-quality software solutions to clients across the globe. We are dedicated to pushing the boundaries of technology and are looking for a talented Full Stack Developer to join our growing team. The ideal candidate will have strong experience in React JS, Node.js, and PostgreSQL and be ready to work on exciting and impactful projects.

Role Overview:
As a Full Stack Developer at LGS, you will be responsible for developing both front-end and back-end components of our web applications. You will work with cutting-edge technologies such as React JS for the front end, Node.js for the server side, and PostgreSQL or Apache Cassandra for database management. This is a fantastic opportunity to work across the entire stack and contribute to innovative projects.

Key Responsibilities:

Frontend Development:
- Design and develop dynamic user interfaces using React JS to deliver high-quality, responsive, and interactive web applications.
- Work closely with UX/UI designers to translate wireframes and mockups into code.
- Implement state management using Redux or the Context API.
- Optimize web applications for speed, scalability, and user experience.

Backend Development:
- Develop server-side logic and APIs using Node.js and Express.js to support front-end functionality.
- Handle server-side authentication, data processing, and integration with third-party services.
- Build and maintain scalable RESTful APIs and work with the front-end team to ensure seamless integration.

Database Management:
- Design and maintain relational databases using PostgreSQL, ensuring optimal performance, data integrity, and security.
- Write efficient SQL queries for data retrieval and management.
- Implement database migrations, optimizations, and data backups.

Collaboration & Code Quality:
- Participate in code reviews and collaborate with other developers to ensure high-quality code and best practices.
- Maintain version control using Git and adhere to Agile development practices.
- Troubleshoot and debug issues across the full stack, ensuring the smooth operation of applications.

Testing & Deployment:
- Write unit tests, integration tests, and end-to-end tests to ensure application reliability.
- Deploy applications to production environments using modern CI/CD practices.
- Continuously monitor and optimize performance, identifying bottlenecks and addressing security vulnerabilities.

Qualifications:
- Proven experience as a Full Stack Developer or in a similar role, with expertise in React JS, Node.js, and PostgreSQL or Apache Cassandra.
- Strong proficiency in JavaScript (ES6+), HTML5, and CSS3.
- Hands-on experience with React JS and state management libraries like Redux or the Context API.
- Experience with Node.js, Express.js, middleware, and JSON Web Tokens for building server-side applications and APIs.
- Strong knowledge of PostgreSQL or Apache Cassandra and experience designing and optimizing relational databases.
- Experience with RESTful API development and integration.
- Familiarity with front-end build tools like Webpack, Babel, and npm/yarn.
- Experience with version control systems, particularly Git.
- Familiarity with unit testing and testing frameworks (e.g., Jest, Mocha).
- Knowledge of Agile/Scrum methodologies.

Nice to Have:
- Familiarity with TypeScript.
- Experience with Docker, containerization, and cloud platforms (e.g., AWS, Heroku, Azure).
- Knowledge of GraphQL or other API technologies.
- Experience with microservices architecture.

Personal Attributes:
- Strong problem-solving skills and the ability to debug complex issues.
- Excellent communication skills, with a collaborative and team-oriented mindset.
- Self-motivated and proactive, with a passion for learning new technologies.
- Detail-oriented and focused on delivering clean, maintainable code.

Interested candidates can share their CV at recruiter@lakheraglobalservices.com or contact us at 98882 55570.

Job Types: Full-time, Permanent
Pay: ₹35,000.00 - ₹55,000.00 per month
Benefits:
- Health insurance
- Paid sick time
Schedule:
- Day shift
- Morning shift
Work Location: In person
Posted 4 hours ago
3.0 - 5.0 years
10 - 12 Lacs
Mohali
On-site
We’re looking for a highly skilled Data Engineer to join our Digital Customer Solutions department. You’ll be at the forefront of building and automating data pipelines that drive performance and condition-based maintenance solutions across the marine, power generation, and locomotive industries. This is your chance to work in an agile, tech-forward team leveraging sensor and time-series data to create advanced analytics solutions for global industrial applications.

Responsibilities
- Develop and optimize scalable data transformation and management systems
- Automate and manage data pipelines using Apache Airflow
- Design APIs for seamless access to time-series and transactional data
- Collaborate with Data Scientists to ensure data availability and quality
- Improve existing systems to meet performance goals
- Integrate cloud-based infrastructure (preferably Microsoft Azure)
- Proactively identify areas for data process enhancements

Requirements
- Degree in a STEM field with 3–5 years of relevant experience
- Strong skills in Python, SQL, and Bash
- Experience with cloud platforms (Azure preferred)
- Hands-on experience with Apache Airflow and pipeline automation
- Experience with time-series databases (e.g., InfluxDB) and PostgreSQL
- Fluent in English (written and verbal)
- Bonus: experience in the marine, locomotive, or industrial analytics domain

Work with global clients and real-world industrial data.

Must: Digital Customer Solutions | Python | SQL | Airflow | Azure | PostgreSQL

Job Types: Full-time, Permanent
Pay: ₹1,000,000.00 - ₹1,200,000.00 per year
Schedule: Day shift, Monday to Friday
Education: Bachelor's (Preferred)
Experience: Python | SQL | Airflow | Azure | PostgreSQL: 2 years (Preferred)
Work Location: In person
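For illustration, a minimal Apache Airflow DAG of the kind of pipeline automation this posting describes. The task logic, schedule, and names are hypothetical placeholders.

```python
# Illustrative only: a two-step daily pipeline in Airflow 2.x.
# The extract/load bodies are stubs; real jobs would call APIs, InfluxDB, PostgreSQL, etc.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_sensor_data(**context):
    # e.g., pull time-series readings from a source system for the run date
    print("extracting sensor data for", context["ds"])


def load_to_postgres(**context):
    # e.g., upsert transformed readings into PostgreSQL
    print("loading curated data for", context["ds"])


with DAG(
    dag_id="sensor_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_sensor_data)
    load = PythonOperator(task_id="load", python_callable=load_to_postgres)
    extract >> load
```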
Posted 4 hours ago
3.0 years
0 Lacs
Delhi
Remote
Apache Superset Data Engineer
Experience: 3–6 years
Location: Bhubaneswar, Delhi-NCR, or remote working

About the Job
The Apache Superset Data Engineer plays a key role in designing, developing, and maintaining scalable data pipelines and analytics infrastructure, with a primary emphasis on data visualization and dashboarding using Apache Superset. This role sits at the intersection of data engineering and business intelligence, enabling stakeholders to access accurate, actionable insights through intuitive dashboards and reports.

Core Responsibilities
- Create, customize, and maintain interactive dashboards in Apache Superset to support KPIs, experimentation, and business insights
- Work closely with analysts, BI teams, and business users to gather requirements and deliver effective Superset-based visualizations
- Perform data validation, feature engineering, and exploratory data analysis to ensure data accuracy and integrity
- Analyze A/B test results and deliver insights that inform business strategies
- Establish and maintain standards for statistical testing, data validation, and analytical workflows
- Integrate Superset with various database systems (e.g., MySQL, PostgreSQL) and manage associated drivers and connections
- Ensure Superset deployments are secure, scalable, and high-performing
- Clearly communicate findings and recommendations to both technical and non-technical stakeholders

Required Skills
- Proven expertise in building dashboards and visualizations using Apache Superset
- Strong command of SQL and experience working with relational databases like MySQL or PostgreSQL
- Proficiency in Python (or Java) for data manipulation and workflow automation
- Solid understanding of data modelling, ETL/ELT pipelines, and data warehousing principles
- Excellent problem-solving skills and a keen eye for data quality and detail
- Strong communication skills, with the ability to simplify complex technical concepts for non-technical audiences
- Nice to have: familiarity with cloud platforms (AWS, ECS)

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3+ years of relevant experience
Posted 4 hours ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Freelancing Opportunity: Senior Linux Administrator

Education: BE/B.Tech in Computer Science or Information Technology / MCA
Experience: 5+ years in IT infrastructure management

Profile Summary:
Highly experienced and results-driven Linux Administrator with over 8 years of progressive experience in managing, deploying, and maintaining complex Linux-based environments. Possesses a strong background in system administration, network services, user access control, and open-source technologies. Proven expertise in handling RADIUS servers, LDAP authentication systems, and database administration in enterprise-level infrastructure. Adept at identifying and resolving issues, optimizing performance, and implementing robust security measures.

Core Competencies:
- Operating systems: Red Hat Enterprise Linux (RHEL), CentOS, Ubuntu, Debian, SUSE
- Authentication & authorization: installation and configuration of RADIUS servers (e.g., FreeRADIUS) for centralized network authentication; experience setting up and managing LDAP (e.g., OpenLDAP, 389 Directory Server) for directory services and user management
- Database management: installation, configuration, tuning, and backup of databases such as MySQL, PostgreSQL, and MariaDB; integration of database backends with applications and monitoring for performance issues
- Networking & security: firewall configuration (iptables, firewalld), VPN setup, secure shell access, and network troubleshooting; implementation of security best practices (SELinux, auditd, logwatch, fail2ban)
- Scripting & automation: proficiency in Bash, shell scripting, and basic Python for automating routine tasks and system monitoring
- Monitoring & tools: experience with tools like Nagios, Zabbix, Prometheus, and Grafana for system and service monitoring
- Server & application support: web servers (Apache, Nginx), DNS (BIND), DHCP, SMTP/IMAP (Postfix, Dovecot); system patching, kernel upgrades, and performance tuning

Professional Experience Highlights:
- Led the migration of enterprise infrastructure from legacy systems to modern Linux distributions, reducing downtime by 30%
- Designed and deployed RADIUS-based network access control for over 500 users across distributed locations
- Implemented centralized LDAP user authentication integrated with multiple services including Samba, SSH, and internal portals
- Conducted regular database performance tuning and ensured high availability of critical applications
- Automated backup and restore procedures for databases and configuration files, improving disaster recovery readiness

Certifications (if applicable):
- Red Hat Certified System Administrator (RHCSA)
- Red Hat Certified Engineer (RHCE)
- LPIC-1 / LPIC-2 (Linux Professional Institute Certification)
- CompTIA Linux+
Posted 4 hours ago
4.0 - 6.0 years
0 Lacs
Bhubaneshwar
On-site
Position: Data Migration Engineer (NV46FCT RM 3324)

Required Qualifications:
- 4–6 years of experience in data migration, data integration, and ETL development
- Hands-on experience with both relational (PostgreSQL, MySQL, Oracle, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) databases
- Experience in Google BigQuery for data ingestion, transformation, and performance optimization
- Proficiency in SQL and scripting languages such as Python or Shell for custom ETL logic
- Familiarity with ETL tools like Talend, Apache NiFi, Informatica, or AWS Glue
- Experience working in cloud environments such as AWS, GCP, or Azure
- Solid understanding of data modeling, schema design, and transformation best practices

Preferred Qualifications:
- Experience in BigQuery optimization, federated queries, and integration with external data sources
- Exposure to data warehouses and lakes such as Redshift, Snowflake, or BigQuery
- Experience with streaming data ingestion tools like Kafka, Debezium, or Google Dataflow
- Familiarity with workflow orchestration tools such as Apache Airflow or dbt
- Knowledge of data security, masking, encryption, and compliance requirements in migration scenarios

Soft Skills:
- Strong problem-solving and analytical mindset with high attention to data quality
- Excellent communication and collaboration skills to work with engineering and client teams
- Ability to handle complex migrations under tight deadlines with minimal supervision

Job Category: Digital_Cloud_Web Technologies
Job Type: Full Time
Job Location: Bhubaneshwar, Noida
Experience: 4–6 years
Notice period: 0–30 days
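For illustration, a minimal sketch of loading a CSV extract into Google BigQuery with the Python client library, the kind of ingestion step this role describes. Project, bucket, dataset, and table names are placeholders.

```python
# Illustrative only: load a staged CSV file from Cloud Storage into BigQuery.
# All identifiers below are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer schema; real migrations usually declare an explicit schema
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://migration-staging/orders/orders_2024.csv",
    "my-project.legacy_migration.orders",
    job_config=job_config,
)
load_job.result()  # wait for the load job to complete

table = client.get_table("my-project.legacy_migration.orders")
print(f"Loaded {table.num_rows} rows into {table.full_table_id}")
```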
Posted 4 hours ago
5.0 years
0 Lacs
Orissa
Remote
No. of Positions: 1
Position: Lead Data Engineer
Location: Hybrid or Remote
Total Years of Experience: 5+ years

Key Responsibilities:
- Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use platforms such as Azure, Salesforce, and AWS technologies
- Monitor active ETL jobs in production
- Build out data lineage artifacts to ensure all current and future systems are properly documented
- Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes
- Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies
- Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations
- Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs
- Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence and MDM solutions, including Data Lakes/Data Vaults

Required Skills:
- This job has no supervisory responsibilities
- Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years' experience in business analytics, data science, software development, data modeling, or data engineering work
- 5+ years' experience with strong proficiency in SQL query/development skills
- Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks
- Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory)
- Experience working in the healthcare industry with PHI/PII
- Creative, lateral, and critical thinker
- Excellent communicator with well-developed interpersonal skills
- Good at prioritizing tasks and time management
- Ability to describe, create, and implement new solutions
- Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef)
- Knowledge of and hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau)

Don't see a role that fits? We are growing rapidly and always on the lookout for passionate and smart engineers! If you are passionate about your career, reach out to us at careers@hashagile.com.
Posted 4 hours ago
10.0 years
8 - 10 Lacs
Lucknow
On-site
Job Title: Linux System Engineer (Tomcat/Apache/Patch Management)
Location: Lucknow
Work Mode: Onsite (work from office), government project (CMMI Level 3 company)
Experience: 10+ years

Key Responsibilities:
- Administer, monitor, and troubleshoot Linux servers (RHEL/CentOS/Ubuntu) in production and staging environments
- Configure, deploy, and manage Apache HTTP Server and Apache Tomcat applications
- Perform regular patching, upgrades, and vulnerability remediation across Linux systems to maintain security compliance
- Ensure availability, reliability, and performance of all server components
- Maintain server hardening and compliance based on organization and industry standards
- Automate routine tasks using shell scripting (Bash; Python preferred)
- Monitor system health using tools like Nagios, Zabbix, or similar
- Collaborate with DevOps and development teams on deployment and release planning
- Support CI/CD pipelines and infrastructure provisioning (exposure to Jenkins, Ansible, Docker, Git, etc.)
- Document system configurations, procedures, and policies

Required Skills & Qualifications:
- 8–10 years of hands-on experience in Linux systems administration
- Strong expertise in Apache and Tomcat setup, tuning, and management
- Experience with patch management tools (e.g., YUM, APT, Satellite, WSUS)
- Proficient in shell scripting (Bash; Python preferred)
- Familiarity with DevOps tools like Jenkins, Ansible, Git, Docker, etc.
- Experience with infrastructure monitoring and alerting tools
- Strong troubleshooting and problem-solving skills
- Understanding of basic networking and firewalls
- Bachelor's degree in Computer Science, Information Technology, or a related field

Preferred:
- Exposure to cloud platforms (AWS, Azure, GCP)
- Certification in Red Hat (RHCE/RHCSA) or Linux Foundation
- Experience with infrastructure as code (Terraform, CloudFormation good to have)

Job Types: Full-time, Permanent
Pay: ₹800,000.00 - ₹1,000,000.00 per year
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 9509902875
Posted 4 hours ago
0 years
0 Lacs
Noida
On-site
Job Description: Senior Software Engineer (Backend)

Our Purpose
We work to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. We cultivate a culture of inclusion for all employees that respects their individual strengths, views, and experiences. We believe that our differences enable us to be a better team – one that makes better decisions, drives innovation and delivers better business results.

Overview
Mastercard’s Network & Digital Payments group creates valued experiences for consumers, and enables merchants and issuers to reach consumers in ways not possible in the pre-digital world. The global product engineering team is responsible for development of a suite of foundational payment solutions. This is an opportunity to be a part of a highly motivated team focused on bringing security, convenience and control to digital payments. Have you ever engineered a product to market on a global scale? Are you motivated to be a part of driving a world beyond cash? Are you agile, with a desire to exhibit your skills at the cutting edge and a high standard for excellence?

As a Senior Software Engineer, you will:
- create designs and drive trade-off discussions within a small collaborative team of product managers and software engineers;
- devise and develop microservices-based solutions focused on achieving positive customer outcomes, customer journeys and customer experience;
- apply best practices from previous industry experience, whilst adhering to our own engineering principles of writing high quality, secure code that is modular, functional and testable;
- perfect automated pipelines for code development, extensive testing and deployment into cloud-based environments in order to deliver accelerated product development;
- take true ownership of your team’s software and actively engage in its maintenance characteristics, runtime properties and dependencies, including hardware, operating system and build;
- support production deployment, system maintenance and operations of the applications, even when things don’t go as expected or incidents occur, responding to incidents as a member of the on-call support team;
- identify incident root causes and ensure the remediation of issues;
- drive ongoing, measurable improvements to the maintenance, operational and runtime characteristics of your team’s software;
- provide feedback on peer designs, code, tests and automations, providing optimization and simplification recommendations;
- champion our adoption of technology standards and opinionated frameworks.

You should apply if:
- you have solid experience with server-side, backend applications and technologies;
- you’re comfortable working in a team that’s exploring new ways of working;
- you enjoy sharing your knowledge and experience with your colleagues;
- you have a strong foundation in algorithms, data structures, object-oriented development, cyber security, design patterns and core computer science concepts;
- you have expertise in modern software design principles, such as SOLID or DRY;
- you have experience in data modelling and database design for both relational and non-relational technologies;
- you have experience of RESTful API and stateless service design patterns and deployment;
- you have lots of experience developing using strongly-typed languages such as Java, Go, C, Scala, Kotlin, etc. – Java is the main language we use at Mastercard;
- you have developed complex applications deployed in cloud platforms such as Azure, AWS or Google Cloud using containerization technologies such as Docker or Kubernetes;
- you’re skilled in testing according to the Test Pyramid principles, automating unit, integration, contract and journey tests;
- you can build robust testing strategies to minimize defects by regression, performance, deployment verification and release testing processes; and,
- you’re excited to build applications that have global impact and are used daily by hundreds of millions of people.

It would also be great if:
- you’ve used the Spring Framework components – it’s what we use to create our cloud-ready applications;
- you’ve used Pivotal Cloud Foundry to deploy cloud-ready applications;
- you’ve built event-driven applications;
- you’ve built 12-factor applications;
- you have “full-stack” experience and can solve all aspects of the technology problem to deliver a solution to a production environment;
- you have knowledge of technologies for databases, messaging, caches, API gateways, networking, pipelines, etc. – and preferably the products we use, such as Jenkins, Splunk, Oracle, Apache Kafka, Redis and NATS.

Our teams and values:
- We work within small collaborative teams consisting of software engineers and product managers;
- Our customers’ success is at the core of what we do;
- We are diverse and inclusive teams from many backgrounds and with many experiences;
- We believe in doing well by doing good through inclusive growth and making ethical and environmentally responsible decisions.

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites or unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of the illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
Posted 4 hours ago
7.0 years
0 Lacs
Udaipur
On-site
Requirements
- 7+ years of hands-on Python development experience
- Proven experience designing and leading scalable backend systems
- Expert knowledge of Python and at least one framework (e.g., Django, Flask)
- Familiarity with ORM libraries and server-side templating (Jinja2, Mako, etc.)
- Strong understanding of multi-threading, multi-process, and event-driven programming
- Proficient in user authentication, authorization, and security compliance
- Skilled in frontend basics: JavaScript, HTML5, CSS3
- Experience designing and implementing scalable backend architectures and microservices
- Ability to integrate multiple databases, data sources, and third-party services
- Proficient with version control systems (Git)
- Experience with deployment pipelines, server environment setup, and configuration
- Ability to implement and configure queueing systems like RabbitMQ or Apache Kafka
- Write clean, reusable, testable code with strong unit test coverage
- Deep debugging skills and secure coding practices ensuring accessibility and data protection compliance
- Optimize application performance for various platforms (web, mobile)
- Collaborate effectively with frontend developers, designers, and cross-functional teams
- Lead deployment, configuration, and server environment efforts

Job Type: Full-time
Pay: Up to ₹4,500,000.00 per month
Schedule: Day shift
Supplemental Pay: Quarterly bonus
Work Location: In person
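For illustration, a minimal sketch of the kind of queueing work this posting mentions: publishing a durable task message to RabbitMQ with the pika client. The host, queue name, and message payload are placeholders.

```python
# Illustrative only: publish a persistent task message to a durable RabbitMQ queue.
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue so queued messages survive a broker restart
channel.queue_declare(queue="task_queue", durable=True)

message = json.dumps({"task": "generate_report", "report_id": 42})
channel.basic_publish(
    exchange="",
    routing_key="task_queue",
    body=message,
    properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
)
print("Sent:", message)
connection.close()
```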
Posted 4 hours ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary:
We are seeking a highly skilled and experienced Data Scientist with a deep understanding of data analytics powered by artificial intelligence (AI) tools. The ideal candidate will be passionate about turning data into actionable insights using cutting-edge AI platforms, automation techniques, and advanced statistical methods.

Key Responsibilities:
- Develop and deploy scalable AI-powered data analytics solutions for business intelligence, forecasting, and optimization
- Leverage AI tools to automate data cleansing, feature engineering, model building, and visualization
- Design and conduct advanced statistical analyses and machine learning models (supervised, unsupervised, NLP, etc.)
- Collaborate cross-functionally with engineering and business teams to drive data-first decision-making

Must-Have Skills & Qualifications:
- Minimum 4 years of professional experience in data science, analytics, or a related field
- Proficiency in Python and/or R with strong hands-on experience in ML libraries (scikit-learn, XGBoost, TensorFlow, etc.)
- Expert knowledge of SQL and working with relational databases
- Proven experience with data wrangling, data pipelines, and ETL processes

Deep understanding of AI tools for data analytics (experience with several of the following is required):
- Data preparation & automation: Alteryx, Trifacta, KNIME
- AI/ML platforms: DataRobot, H2O.ai, Amazon SageMaker, Azure ML Studio, Google Vertex AI
- Visualization & BI: Tableau, Power BI, Looker (with AI/ML integrations)
- AutoML & predictive modeling: Google AutoML, IBM Watson Studio, BigML
- NLP & text analytics: OpenAI (ChatGPT, Codex APIs), Hugging Face Transformers, MonkeyLearn
- Workflow orchestration: Apache Airflow, Prefect

Preferred Qualifications:
- Degree in Computer Science, Data Science, Statistics, or a related field
- Experience in cloud-based environments (AWS, GCP, Azure) for ML workloads

To apply, please send your resume to sooraj@superpe.in or shreya@superpe.in. SuperPe is an equal opportunity employer and welcomes candidates of all backgrounds to apply. We look forward to hearing from you!
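For illustration, a minimal scikit-learn workflow of the kind this role involves, using a bundled dataset as a stand-in for real business data; the model choice and metric are illustrative, not requirements from the posting.

```python
# Illustrative only: a small supervised-learning pipeline with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("scale", StandardScaler()),            # basic feature preparation
    ("clf", GradientBoostingClassifier()),  # supervised classifier
])
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```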
Posted 4 hours ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
CACI India, RMZ Nexity, Tower 30, 4th Floor, Survey No. 83/1, Knowledge City, Raidurg Village, Silpa Gram Craft Village, Madhapur, Serilingampalle (M), Hyderabad, Telangana 500081, India
Req #1097 | 02 May 2025

CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2,000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres.

About the Data Platform
The Data Platform will be built and managed “as a Product” to support a Data Mesh organization. The Data Platform focusses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance on data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally.

What does a Data Infrastructure Engineer do?
A Data Infrastructure Engineer is responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business. The data platform infrastructure conforms to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create reusable solutions that reflect the business needs.

Responsibilities will include:
- Collaborating across CACI departments to develop and maintain the data platform
- Building infrastructure and data architectures in CloudFormation and SAM
- Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake
- Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow
- Monitoring and reporting on the data platform's performance, usage and security
- Designing and applying security and access control architectures to secure sensitive data

You will have:
- 3+ years of experience in a Data Engineering role
- Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift
- Experience administering databases and data platforms
- Good coding discipline in terms of style, structure, versioning, documentation and unit tests
- Strong proficiency in CloudFormation, Python and SQL
- Knowledge and experience of relational databases such as Postgres and Redshift
- Experience using Git for code versioning and lifecycle management
- Experience operating to Agile principles and ceremonies
- Hands-on experience with CI/CD tools such as GitLab
- Strong problem-solving skills and the ability to work independently or in a team environment
- Excellent communication and collaboration skills
- A keen eye for detail, and a passion for accuracy and correctness in numbers

Whilst not essential, the following skills would also be useful:
- Experience using Jira, or other agile project management and issue tracking software
- Experience with Snowflake
- Experience with spatial data processing

More about the opportunity
The Data Engineer role is an excellent opportunity, and CACI Services India rewards its staff well with a competitive salary and an impressive benefits package, which includes:
- Learning: budget for conferences, training courses and other materials
- Health benefits: family plan with 4 children and parents covered
- Future You: matched pension and health care package

We understand the importance of getting to know your colleagues. Company meetings are held every quarter, and a training/work brief weekend is held once a year, amongst many other social events.

CACI is an equal opportunities employer. Therefore, we embrace diversity and are committed to a working environment where no one will be treated less favourably on the grounds of their sex, race, disability, sexual orientation, religion, belief or age. We have a Diversity & Inclusion Steering Group and we always welcome new people with fresh perspectives from any background to join the group. An inclusive and equitable environment enables us to draw on expertise and unique experiences and bring out the best in each other. We champion diversity, inclusion and wellbeing, and we are supportive of veterans and people from a military background. We believe that by embracing diverse experiences and backgrounds, we can collaborate to create better outcomes for our people, our customers and our society.

Other details
Pay Type: Salary
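For illustration, a small boto3 sketch of the kind of pipeline glue code this role describes: a Lambda handler that starts an AWS Glue job when a file lands in S3. The job name, bucket, and argument names are hypothetical placeholders.

```python
# Illustrative only: trigger a Glue job from an S3 "ObjectCreated" event via Lambda.
# Job name and argument keys are placeholders.
import boto3

glue = boto3.client("glue")


def lambda_handler(event, context):
    # Standard S3 event notification structure
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    response = glue.start_job_run(
        JobName="curate_landing_data",
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"glue_job_run_id": response["JobRunId"]}
```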
Posted 4 hours ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: AWS Java Developer
Experience: 4–8 years
Location: PAN India

Required Skills:
- Primary: AWS + Java (Spring Boot)
- Secondary: Node.js, TypeScript, JSLT, AWS (various services), Git, Maven, Docker, New Relic, SQL, DBA

Job Description (AWS, Java, Node.js):
- Highly proficient in Java and Node.js
- Must have hands-on experience and a deep understanding of cloud technologies: microservices/API, AWS, IAM, S3, EFS, Amazon SQS, Amazon SNS, AWS APIs, AWS CLI, Amazon Kinesis, Apache Kafka, CloudFormation, Serverless
- Good understanding of relational databases like MySQL and PostgreSQL; exposure to NoSQL systems like Redis/MongoDB is a plus
- Good understanding of web technologies such as JavaScript, HTML5, CSS
- Good understanding of search platforms such as Elasticsearch is required
- A good understanding of Agile development methodologies
- Hands-on experience in improving MySQL queries and server response time is a must
- Good understanding of version control tools like Git and Subversion is required
- Good understanding of Docker, Kubernetes, Jenkins, and CI/CD tools
- Familiarity with TDD in JS with the help of frameworks like Jasmine, Mocha, Chai, Karma, etc. is a plus
- Excellent analytical skills
- Good verbal and written communication
Posted 4 hours ago
7.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Manager – Azure Data Architect

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing, Healthcare, Retail, Auto, Supply Chain, and Finance.

The opportunity
We’re looking for Managers (Big Data Architects) with strong technology and data understanding and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a part of a growing Data and Analytics team.

Your Key Responsibilities
- Develop standardized practices for delivering new products and capabilities using Big Data and cloud technologies, including data acquisition, transformation, analysis, modelling, governance, and data management skills
- Interact with senior client technology leaders, understand their business goals, create and propose solutions, estimate effort, build architectures, and develop and deliver technology solutions
- Define and develop client-specific best practices around data management within a cloud environment
- Recommend design alternatives for data ingestion, processing and provisioning layers
- Design and develop data ingestion programs to process large data sets in batch mode using ADB, ADF, PySpark, Python, and Synapse
- Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies
- Have managed a team and have experience in end-to-end delivery
- Have experience building technical capability and teams to deliver

Skills and Attributes for Success
- Strong understanding of and familiarity with all Cloud ecosystem components
- Strong understanding of underlying Cloud architectural concepts and distributed computing paradigms
- Experience in the development of large-scale data processing
- Experience with CI/CD pipelines for data workflows in Azure DevOps
- Hands-on programming experience in ADB, ADF, Synapse, Python, PySpark, SQL
- Hands-on expertise in cloud services like AWS and/or the Microsoft Azure ecosystem
- Solid understanding of ETL methodologies in a multi-tiered stack with data modelling and data governance
- Experience with BI and data analytics databases
- Experience in converting business problems/challenges to technical solutions considering security, performance, scalability, etc.
- Experience in enterprise-grade solution implementations
- Experience in performance benchmarking enterprise applications
- Strong stakeholder, client, team, process and delivery management skills

To qualify for the role, you must have
- A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
- Excellent communication skills (written and verbal, formal and informal)
- The ability to multi-task under pressure and work independently with minimal supervision
- Strong verbal and written communication skills
- A team-player mindset, enjoying a cooperative and collaborative team environment
- Adaptability to new technologies and standards
- Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
- A minimum of 7 years of hands-on experience in one or more of the above areas
- A minimum of 8–11 years of industry experience

Ideally, you’ll also have
- Project management skills
- Client management skills
- Solutioning skills

Nice to have
- Knowledge of data security best practices
- Knowledge of Data Architecture design patterns

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
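For illustration, a minimal PySpark Structured Streaming job reading from Kafka, the kind of real-time ingestion this posting describes. The broker address, topic, and output paths are placeholders, and the job assumes the spark-sql-kafka connector package is available on the cluster.

```python
# Illustrative only: stream Kafka messages into a bronze landing area with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "transactions")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings for downstream parsing
events = raw.select(
    col("key").cast("string").alias("key"),
    col("value").cast("string").alias("payload"),
    col("timestamp"),
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/mnt/bronze/transactions")
    .option("checkpointLocation", "/mnt/checkpoints/transactions")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```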
Posted 4 hours ago
15.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction Joining the IBM Technology Expert Labs teams means you’ll have a career delivering world-class services for our clients. As the ultimate expert in IBM products, you’ll bring together all the necessary technology and services to help customers solve their most challenging problems. Working in IBM Technology Expert Labs means accelerating the time to value confidently and ensuring speed and insight while our clients focus on what they do best—running and growing their business. Excellent onboarding and industry-leading learning culture will set you up for a positive impact, while advancing your career. Our culture is collaborative and experiential. As part of a team, you will be surrounded by bright minds and keen co-creators—always willing to help and be helped—as you apply passion to work that will positively impact the world around us. Your Role And Responsibilities As a Delivery Consultant, you will work closely with IBM clients and partners to design, deliver, and optimize IBM Technology solutions that align with your clients’ goals. In this role, you will apply your technical expertise to ensure world-class delivery while leveraging your consultative skills such as problem-solving issue- / hypothesis-based methodologies, communication, and service orientation skills. As a member of IBM Technology Expert Labs, a team that is client focused, courageous, pragmatic, and technical, you’ll collaborate with clients to optimize and trailblaze new solutions that address real business challenges. If you are passionate about success with both your career and solving clients’ business challenges, this role is for you. To help achieve this win-win outcome, a ‘day-in-the-life’ of this opportunity may include, but not be limited to… Solving Client Challenges Effectively: Understanding clients’ main challenges and developing solutions that helps them reach true business value by working thru the phases of design, development integration, implementation, migration and product support with a sense of urgency . Agile Planning and Execution: Creating and executing agile plans where you are responsible for installing and provisioning, testing, migrating to production, and day-two operations. Technical Solution Workshops: Conducting and participating in technical solution workshops. Building Effective Relationships: Developing successful relationships at all levels —from engineers to CxOs—with experience of navigating challenging debate to reach healthy resolutions. Self-Motivated Problem Solver: Demonstrating a natural bias towards self-motivation, curiosity, initiative in addition to navigating data and people to find answers and present solutions. Collaboration and Communication: Strong collaboration and communication skills as you work across the client, partner, and IBM team. Preferred Education Bachelor's Degree Required Technical And Professional Expertise In-depth knowledge of the IBM Data & AI portfolio. 
15+ years of experience in software services
10+ years of experience in the planning, design, and delivery of one or more products from the IBM Data Integration and IBM Data Intelligence product platforms
Experience in designing and implementing solutions on IBM Cloud Pak for Data, IBM DataStage Nextgen, and Orchestration Pipelines
10+ years of experience with ETL and database technologies
Experience in architectural planning and implementation for the upgrade/migration of these specific products
Experience in designing and implementing Data Quality solutions
Experience with installation and administration of these products
Excellent understanding of cloud concepts and infrastructure
Excellent verbal and written communication skills are essential
Preferred Technical And Professional Experience
Experience with any of the DataStage, Informatica, SAS, or Talend products
Experience with any of IKC, IGC, or Axon
Experience with programming languages like Java/Python
Experience with the AWS, Azure, Google, or IBM cloud platforms
Experience with Red Hat OpenShift
Good to have: knowledge of Apache Spark, shell scripting, GitHub, and JIRA
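For readers who want a concrete picture of the ETL and data-quality work this role describes, here is a minimal, illustrative Python sketch of a data-quality gate that an ETL job might run before loading a table. It is not IBM DataStage or Cloud Pak for Data code; the column names, rules, and thresholds are hypothetical.

```python
# Illustrative only: a tiny data-quality gate such as an ETL job might run
# before loading a table. Column names and rules are hypothetical.
import pandas as pd

def quality_check(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues found in the extracted frame."""
    issues = []
    if df["customer_id"].isna().any():
        issues.append("null customer_id values found")
    if df.duplicated(subset=["customer_id"]).any():
        issues.append("duplicate customer_id rows found")
    if (df["order_total"] < 0).any():
        issues.append("negative order_total values found")
    return issues

if __name__ == "__main__":
    extracted = pd.DataFrame(
        {"customer_id": [1, 2, 2, None], "order_total": [10.0, 5.5, -3.0, 7.0]}
    )
    problems = quality_check(extracted)
    if problems:
        # In a real pipeline this would fail the job or quarantine the rows.
        print("Load blocked:", "; ".join(problems))
    else:
        print("Load approved")
```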
Posted 4 hours ago
3.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Job Responsibilities:
Contributing to all phases of the development lifecycle
Writing well-designed, testable, efficient code
Reviewing, testing, and debugging code
Understanding and following the standards, guidelines, and best practices adopted in the project
Participating in Agile-Scrum development
Good communication skills
Technical Responsibilities:
Proven experience as a Full Stack Developer or in a similar role
Experience developing desktop and mobile applications
3+ years of experience with Java
Broad experience with various Java development frameworks such as Spring, Spring MVC, Spring Boot, and Spring REST
Experience with ORM technologies such as Spring Data JPA, Hibernate/JDBC
Familiarity with databases (e.g. MySQL), web servers (e.g. Apache, Jetty), and UI/UX design
2+ years of experience in multiple front-end languages and libraries (e.g. HTML/CSS, JavaScript, Ajax, jQuery, React JS)
Experience with DevOps practices
Experience with GitLab, Jenkins, Maven, and Linux
An analytical mind with good communication and teamwork skills
Knowledge of memory leak issues and how to resolve them
Version Control: Git
Posted 4 hours ago
89.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Full-time
Company Description
GfK - Growth from Knowledge. For over 89 years, we have earned the trust of our clients around the world by solving critical questions in their decision-making process. We fuel their growth by providing a complete understanding of their consumers' buying behavior, and the dynamics impacting their markets, brands and media trends. In 2023, GfK combined with NIQ, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - GfK drives "Growth from Knowledge".
Job Description
It's an exciting time to be a builder. Constant technological advances are creating an exciting new world for those who understand the value of data. The mission of NIQ's Media Division is to turn NIQ into the global leader that transforms how consumer brands plan, activate and measure their media activities. Recombine is the delivery area focused on maximising the value of data assets in our NIQ Media Division. We apply advanced statistical and machine learning techniques to unlock deeper insights, whilst integrating data from multiple internal and external sources. Our teams develop data integration products across various markets and product areas, delivering enriched datasets that power client decision-making.
Role Overview
We are looking for a Principal Software Engineer for our Recombine delivery area to provide technical leadership within our development teams, ensuring best practices, architectural coherence, and effective collaboration across projects. This role is ideal for a highly experienced engineer who can bridge the gap between data engineering, data science, and software engineering, helping teams build scalable, maintainable, and well-structured data solutions. As a Principal Software Engineer, you will play a hands-on role in designing and implementing solutions while mentoring developers, influencing technical direction, and driving best practices in software and data engineering. This role includes line management responsibilities, ensuring the growth and development of team members. The role will be working within an AWS environment, leveraging the power of cloud-native technologies and modern data platforms.
Key Responsibilities
Technical Leadership & Architecture
Act as a technical architect, ensuring alignment between the work of multiple development teams in data engineering and data science.
Design scalable, high-performance data processing solutions within AWS, considering factors such as governance, security, and maintainability.
Drive the adoption of best practices in software development, including CI/CD, testing strategies, and cloud-native architecture.
Work closely with Product Owners to translate business needs into technical solutions.
Hands-on Development & Technical Excellence
Lead by example through high-quality coding, code reviews, and proof-of-concept development.
Solve complex engineering problems and contribute to critical design decisions.
Ensure effective use of AWS services, including AWS Glue, AWS Lambda, Amazon S3, Redshift, and EMR.
Develop and optimise data pipelines, data transformations, and ML workflows in a cloud environment.
Line Management & Team Development
Provide line management to engineers, ensuring their professional growth and development.
Conduct performance reviews, set development goals, and mentor team members to enhance their skills.
Foster a collaborative and high-performing engineering culture, promoting knowledge sharing and continuous improvement beyond team boundaries.
Support hiring, onboarding, and career development initiatives within the engineering team.
Collaboration & Cross-Team Coordination
Act as the technical glue between data engineers, data scientists, and software developers, ensuring smooth integration of different components.
Provide mentorship and guidance to developers, helping them level up their skills and technical understanding.
Work with DevOps teams to improve deployment pipelines, observability, and infrastructure as code.
Engage with stakeholders across the business, translating technical concepts into business-relevant insights.
Governance, Security & Data Best Practices
Champion data governance, lineage, and security across the platform.
Advocate for and implement scalable data architecture patterns, such as Data Mesh, Lakehouse, or event-driven pipelines.
Ensure compliance with industry standards, internal policies, and regulatory requirements.
Qualifications
Requirements & Experience
Strong software engineering background with experience in designing and building production-grade applications in Python, Scala, Java, or similar languages.
Proven experience with AWS-based data platforms, specifically AWS Glue, Redshift, Athena, S3, Lambda, and EMR.
Expertise in Apache Spark and AWS Lake Formation, with experience building large-scale distributed data pipelines (an illustrative PySpark sketch follows this posting).
Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions.
Cloud experience in AWS, including containerisation (Docker, Kubernetes, ECS, EKS) and infrastructure as code (Terraform, CloudFormation).
Strong knowledge of modern software architecture, including microservices, event-driven systems, and distributed computing.
Experience leading teams in an agile environment, with a strong understanding of CI/CD pipelines, automated testing, and DevOps practices.
Excellent problem-solving and communication skills, with the ability to engage with both technical and non-technical stakeholders.
Proven line management experience, including mentoring, career development, and performance management of engineering teams.
Additional Information
Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)
About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com
Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook
Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
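As a rough illustration of the AWS-based pipeline work referenced in the requirements above, here is a minimal PySpark batch job of the kind an AWS Glue or EMR step might run. This is only a sketch: the bucket names, schema, and aggregation are hypothetical, not NIQ's actual pipeline.

```python
# Illustrative only: a small PySpark batch job reading raw events from S3,
# cleansing them, and writing partitioned Parquet for downstream consumers.
# Bucket names and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("panel-enrichment-example").getOrCreate()

# Extract: read raw purchase events from S3.
raw = spark.read.json("s3://example-raw-bucket/purchases/2024/")

# Transform: basic cleansing and a daily aggregate per retailer.
daily = (
    raw.where(F.col("amount") > 0)
       .withColumn("purchase_date", F.to_date("event_ts"))
       .groupBy("retailer_id", "purchase_date")
       .agg(F.sum("amount").alias("total_sales"),
            F.countDistinct("household_id").alias("buying_households"))
)

# Load: write partitioned Parquet that Athena or Redshift Spectrum can query.
(daily.write
      .mode("overwrite")
      .partitionBy("purchase_date")
      .parquet("s3://example-curated-bucket/daily_sales/"))
```

In practice a job like this would typically be scheduled by Airflow or Step Functions and parameterised by run date rather than hard-coded paths.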
Posted 4 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next-generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.
Roles and Responsibilities
The Senior Tech Lead - Databricks leads the design, development, and implementation of advanced data solutions. The role requires extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects.
Responsibilities
Lead the design and implementation of Databricks-based data solutions.
Architect and optimize data pipelines for batch and streaming data.
Provide technical leadership and mentorship to a team of data engineers.
Collaborate with stakeholders to define project requirements and deliverables.
Ensure best practices in data security, governance, and compliance.
Troubleshoot and resolve complex technical issues in Databricks environments.
Stay updated on the latest Databricks features and industry trends.
Key Technical Skills & Responsibilities
Experience in data engineering using Databricks or Apache Spark-based platforms.
Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion.
Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse.
Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation.
Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing.
Familiarity with Delta Lake, Delta Live Tables, and the medallion architecture for data lakehouse implementations (a brief illustrative sketch follows this posting).
Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation.
Design and implement Azure Key Vault and scoped credentials.
Knowledge of Git for source control and CI/CD integration for Databricks workflows, cost optimization, and performance tuning.
Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups.
Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus.
Ability to define best practices, support multiple projects, and mentor junior engineers is a plus.
Must have experience working with streaming data sources and Kafka (preferred).
Eligibility Criteria
Bachelor's degree in Computer Science, Data Engineering, or a related field
Extensive experience with Databricks, Delta Lake, PySpark, and SQL
Databricks certification (e.g., Certified Data Engineer Professional)
Experience with machine learning and AI integration in Databricks
Strong understanding of cloud platforms (AWS, Azure, or GCP)
Proven leadership experience in managing technical teams
Excellent problem-solving and communication skills
Our Offering
Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
Wellbeing programs & work-life balance - integration and passion-sharing events.
Attractive salary and company initiative benefits
Courses and conferences
Hybrid work culture
Let's grow together.
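To make the medallion-architecture item above more concrete, here is a minimal PySpark/Delta sketch of a bronze-to-silver refinement step. It assumes a Databricks-style runtime where Delta Lake is available and a Spark session is preconfigured; the table and column names are hypothetical, not an Eviden deliverable.

```python
# Illustrative only: a bronze-to-silver step in a medallion-style lakehouse.
# Assumes a Databricks-like runtime with Delta Lake available; table and
# column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

# On Databricks, getOrCreate() returns the cluster's preconfigured session.
spark = SparkSession.builder.getOrCreate()

# Bronze: raw, append-only ingested data.
bronze = spark.read.table("bronze.raw_orders")

# Silver: deduplicated, typed, and filtered records ready for analytics.
silver = (
    bronze.dropDuplicates(["order_id"])
          .where(F.col("order_status").isNotNull())
          .withColumn("order_ts", F.to_timestamp("order_ts_str"))
          .select("order_id", "customer_id", "order_ts", "order_status", "amount")
)

(silver.write
       .format("delta")
       .mode("overwrite")
       .saveAsTable("silver.orders"))
```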
Posted 4 hours ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
With Confluent, organisations can harness the full power of continuously flowing data to innovate and win in the modern digital world. We have a purpose that drives us to do better every day - we're creating an entirely new category within data infrastructure: data streaming. This technology will allow every organisation to create experiences and use the power of data in ways that profoundly impact the way we all live. This impact is our purpose and drives us to do better every day.
One Confluent. One team. One Data Streaming Platform. Data Connects Us.
About The Role
We are looking for a Senior Consulting Engineer to join our Customer Success and Professional Services team. As a Consulting Engineer, you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, and relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges. Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers' existing deployments.
What You Will Do
Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer's architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
Providing feedback to the Confluent Product and Engineering groups
Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
Testing performance and functionality of new components developed by Engineering
Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
Honing your skills, building applications, or trying out new product features
Participating in community and industry events
What You Will Bring
Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka (a minimal producer/consumer sketch follows this posting)
Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
Experience with Java Virtual Machine (JVM) tuning and troubleshooting
Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
Ability to quickly learn new technologies
Ability and willingness to travel up to 20% of the time to meet with customers
Come As You Are
At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact. Employment decisions are made on the basis of job-related criteria without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by applicable law.
Click HERE to review our Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies collect, use, and share certain personal information of California job applicants and prospective employees.
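As a small illustration of the Kafka-centred work this role involves, here is a hedged sketch of a produce/consume round trip using the confluent-kafka Python client. The broker address, topic, and payload are hypothetical; a real engagement would add delivery callbacks, error handling, and schema management.

```python
# Illustrative only: a minimal produce/consume round trip with the
# confluent-kafka Python client. Broker, topic, and payload are hypothetical.
from confluent_kafka import Consumer, Producer

TOPIC = "orders-example"

# Produce a single message and wait for delivery.
producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce(TOPIC, key="order-1", value=b'{"order_id": 1, "amount": 42.0}')
producer.flush()

# Consume it back with a fresh consumer group reading from the beginning.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "example-consumer-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

msg = consumer.poll(10.0)  # wait up to 10 seconds for one message
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```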
Posted 5 hours ago
3.0 - 4.0 years
0 Lacs
Surat, Gujarat, India
On-site
Job Title - DevOps Engineer
Location - Surat (On-site)
Experience - 3-4 years
Job Summary:
We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background in software engineering and are familiar with Ruby or Python, we'd like to meet you. Ultimately, you will execute and automate operational processes quickly, accurately, and securely.
Roles & Responsibilities:
Strong experience with essential DevOps tools and technologies including Kubernetes, Terraform, Azure DevOps, Jenkins, Maven, Git, GitHub, and Docker.
Hands-on experience in Azure cloud services, including:
Virtual Machines (VMs)
Blob Storage
Virtual Network (VNet)
Load Balancer & Application Gateway
Azure Resource Manager (ARM)
Azure Key Vault
Azure Functions
Azure Kubernetes Service (AKS)
Azure Monitor, Log Analytics, and Application Insights
Azure Container Registry (ACR) and Azure Container Instances (ACI)
Azure Active Directory (AAD) and RBAC
Creative in automating, configuring, and deploying infrastructure and applications across Azure environments and hybrid cloud data centers.
Build and maintain CI/CD pipelines using Azure DevOps, Jenkins, and scripting for scalable SaaS deployments.
Develop automation and infrastructure-as-code (IaC) using Terraform, ARM Templates, or Bicep for managing and provisioning cloud resources.
Expert in managing containerized applications using Docker and orchestrating them via Kubernetes (AKS).
Proficient in setting up monitoring, logging, and alerting systems using Azure-native tools and integrating with third-party observability stacks.
Experience implementing auto-scaling, load balancing, and high-availability strategies for cloud-native SaaS applications.
Configure and maintain CI/CD pipelines and integrate with quality and security tools for automated testing, compliance, and secure deployments.
Deep knowledge of writing Ansible playbooks and ad hoc commands for automating provisioning and deployment tasks across environments.
Experience integrating Ansible with Azure DevOps/Jenkins for configuration management and workflow automation.
Proficient in using Maven and Artifactory for build management and writing POM.xml scripts for Java-based applications.
Skilled in GitHub repository management, including setting up project-specific access, enforcing code quality standards, and managing pull requests.
Experience with web and application servers such as Apache Tomcat for deploying and troubleshooting enterprise-grade Java applications.
Ability to design and maintain scalable, resilient, and secure infrastructure to support the rapid growth of SaaS applications.
Qualifications & Requirements:
Proven experience as a DevOps Engineer, Site Reliability Engineer, or in a similar software engineering role.
Strong experience working in SaaS environments with a focus on scalability, availability, and performance.
Proficiency in Python or Ruby for scripting and automation.
Working knowledge of SQL and database management tools.
Strong analytical and problem-solving skills with a collaborative and proactive mindset.
Familiarity with Agile methodologies and ability to work in cross-functional teams.
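As an example of the kind of Python automation this role calls for, here is a minimal sketch that uploads a build artifact to Azure Blob Storage using azure-identity and azure-storage-blob. The storage account, container, and file names are hypothetical, and credentials are assumed to come from the ambient environment (an Azure CLI login or a managed identity).

```python
# Illustrative only: a small automation script uploading a build artifact to
# Azure Blob Storage. Account, container, and file names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

ACCOUNT_URL = "https://examplestorageacct.blob.core.windows.net"
CONTAINER = "build-artifacts"
ARTIFACT = "app-release.zip"

def upload_artifact() -> None:
    # Picks up credentials from the environment (CLI login, managed identity, etc.).
    credential = DefaultAzureCredential()
    service = BlobServiceClient(account_url=ACCOUNT_URL, credential=credential)
    container = service.get_container_client(CONTAINER)
    with open(ARTIFACT, "rb") as data:
        container.upload_blob(name=ARTIFACT, data=data, overwrite=True)
    print(f"Uploaded {ARTIFACT} to {CONTAINER}")

if __name__ == "__main__":
    upload_artifact()
```

A script like this would typically run as a pipeline step in Azure DevOps or Jenkins after the build stage, with the artifact path passed in as a parameter rather than hard-coded.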
Posted 5 hours ago