Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
0 years
2 - 3 Lacs
Noida
On-site
Job Description: We’re looking for a skilled DevOps Engineer to take ownership of our infrastructure and deployment processes. You will be responsible for managing our AWS environment, handling application hosting, and ensuring smooth, secure, and scalable system operations.

Responsibilities:
- Manage and optimize AWS infrastructure and services (EC2, S3, RDS, Route53, IAM, etc.)
- Handle deployments using CI/CD tools (GitHub Actions, Jenkins, or similar)
- Set up and manage cron jobs and background processes
- Configure and manage Nginx/Apache servers, including SSL setup and reverse proxies
- Host and maintain a variety of applications (Node.js, PHP, Python, etc.)
- Ensure system reliability, uptime, and network security
- Use tools like PM2 or similar for process management
- Collaborate with developers to automate deployment and improve development pipelines
- Monitor, log, and alert using tools like CloudWatch or third-party alternatives

Requirements:
- Proven experience with AWS and cloud architecture
- Strong Linux/Unix system administration skills
- Familiarity with Nginx, Apache, SSL, and reverse proxies
- Experience with CI/CD tools and scripting for automation
- Knowledge of Docker and container-based deployments (optional but preferred)
- Ability to troubleshoot performance and security issues
- Strong understanding of DevOps principles and Git workflows

Job Types: Full-time, Permanent
Pay: ₹250,000.00 - ₹350,000.00 per year
Benefits: Leave encashment

Application Question(s):
- How many years of hands-on experience do you have managing AWS infrastructure (e.g., EC2, S3, IAM, RDS)?
- Have you managed application hosting and deployments for multi-language stacks (e.g., Node.js, PHP, Python)?
- Have you set up and maintained secure SSL configurations (manual or via tools like Certbot)?
- Are you comfortable managing Linux-based environments (e.g., via SSH, scripting, cron jobs)?
- What is your current CTC?
- What is your expected CTC?
- Are you comfortable working in Noida?

Work Location: In person
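The monitoring and alerting duty in this posting (CloudWatch or a third-party alternative) boils down to watching a signal and firing when a threshold is crossed. A minimal stdlib-only sketch; the log format, threshold, and helper names here are illustrative assumptions, not part of the posting:

```python
# Minimal sketch: count HTTP 5xx responses in Nginx-style access log
# lines and flag when an error-rate threshold is crossed. A real setup
# would use CloudWatch or Prometheus; this only shows the idea.
import re

LOG_LINE = re.compile(r'"\w+ \S+ HTTP/[\d.]+" (\d{3})')

def error_rate(log_lines):
    """Return (total_requests, fraction_of_5xx_responses)."""
    statuses = [m.group(1) for line in log_lines if (m := LOG_LINE.search(line))]
    if not statuses:
        return 0, 0.0
    errors = sum(1 for s in statuses if s.startswith("5"))
    return len(statuses), errors / len(statuses)

def should_alert(log_lines, threshold=0.05):
    total, rate = error_rate(log_lines)
    return total > 0 and rate >= threshold

sample = [
    '1.2.3.4 - - [01/Jan/2025] "GET /api HTTP/1.1" 200 512',
    '1.2.3.4 - - [01/Jan/2025] "GET /api HTTP/1.1" 502 0',
    '1.2.3.4 - - [01/Jan/2025] "POST /login HTTP/1.1" 200 128',
    '1.2.3.4 - - [01/Jan/2025] "GET /img HTTP/1.1" 404 0',
]
```

In practice a script like this would run from cron (another duty listed above) and push to an alerting channel instead of returning a boolean.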
Posted 1 week ago
0 years
4 - 8 Lacs
Calcutta
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Associate, Data Engineer, AWS!

Responsibilities:
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka.
- Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
- Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using big data technologies such as Apache Hadoop and Apache Spark with appropriate cloud services such as AWS.
- Build data pipelines by building ETL (Extract-Transform-Load) processes.
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
- Analyse requirements/user stories in business meetings, strategize the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and potential problems.
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost, require minimal maintenance, and provide high availability with improved security.
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work the same way.
- Coordinate with release management and other supporting teams to deploy changes to the production environment.

Qualifications we seek in you!

Minimum Qualifications:
- Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
- Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift.
- Experience with Databricks is an added advantage.
- Strong experience in Python and SQL.
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
- Advanced programming skills in Python for data processing and automation.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with Apache Kafka for real-time data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications/Skills:
- Master’s degree in Computer Science, Electronics, or Electrical Engineering.
- AWS Data Engineering and cloud certifications; Databricks certifications.
- Experience with multiple data integration technologies and cloud platforms.
- Knowledge of Change & Incident Management processes.

Why join Genpact?
- Be a transformation leader – work at the cutting edge of AI, automation, and digital innovation.
- Make an impact – drive change for global enterprises and solve business challenges that matter.
- Accelerate your career – get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best – join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture – our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.
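The Extract-Transform-Load cycle this role centers on can be illustrated with a minimal stdlib-only pipeline. Production versions would use Glue, Spark, or Kafka as the posting describes; the CSV layout and the SQLite target here are assumptions made for the sketch:

```python
# Minimal ETL sketch: read CSV records, normalize them, and load them
# into a SQLite table. Stands in for the Glue/Spark pipelines described
# in the posting.
import csv
import io
import sqlite3

RAW = """order_id,amount,currency
1001, 250.00 ,inr
1002, 75.50 ,INR
"""

def extract(text):
    """Extract: parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: strip whitespace, cast types, upper-case currency codes."""
    return [
        (int(r["order_id"]), float(r["amount"].strip()), r["currency"].strip().upper())
        for r in rows
    ]

def load(records, conn):
    """Load: insert into the warehouse table and return a quick summary."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)
    return conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()

conn = sqlite3.connect(":memory:")
count, total = load(transform(extract(RAW)), conn)
```

The three stages map directly onto the posting’s vocabulary: sources feed `extract`, mapping logic lives in `transform`, and the warehouse (Redshift in the posting, SQLite here) receives `load`.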
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Senior Associate
Primary Location: India-Kolkata
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 25, 2025, 8:07:51 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
Posted 1 week ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Deliver performance-focused backend system solutions, mostly in Java.

- Build and maintain new and existing applications using Java
- Object-oriented software analysis and design
- Solid understanding of object-oriented programming and data modelling
- Experience with networking and distributed systems
- Experience with, and appreciation for, automated testing
- Experience with cloud compute, virtualisation, and automation using Kubernetes and AWS
- Preferably, exposure to open-source applications, e.g. Cassandra and Apache Flink
- BS/MS/PhD in Computer Science or a related field, or equivalent experience
- Proven experience solving problems in complex domains
- Proactively identify and manage risks, including assessing and controlling risks of various kinds, and apply this appropriately to diverse situations
- Displays courage and willingness to contribute constructive feedback – not afraid to highlight issues and challenges and to bring alternative solutions to the table
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview: Intuit is seeking a Sr Data Scientist to join our Data Science team at Intuit India. The AI team develops game-changing technologies and experiments that redefine and disrupt our current product offerings. You’ll be building and prototyping algorithms and applications on top of the collective financial data of 100 million consumers and small businesses. Applications will span multiple business lines, including personal finance, small business accounting, and tax. You thrive on ambiguity and will enjoy the frequent pivoting that’s part of the exploration. Your team will be very small, and team members frequently wear multiple hats. In this position you will collaborate closely with the engineering and design teams, as well as the product and data teams in business units. Your role will range from research experimentalist to technology innovator to consultative business facilitator. You must be comfortable partnering with those directly involved with big data infrastructure, software, and data warehousing, as well as product management.

What you'll bring:
- MS or PhD in an appropriate technology field (Computer Science, Statistics, Applied Math, Operations Research, etc.)
- 2+ years of experience with data science for PhD holders; 5+ years for Master’s holders
- Experience with modern advanced analytical tools and programming languages such as R or Python with scikit-learn
- Efficient in SQL, Hive, SparkSQL, etc.
- Comfortable in a Linux environment
- Experience with data mining algorithms and statistical modeling techniques such as clustering, classification, regression, decision trees, neural nets, support vector machines, anomaly detection, recommender systems, sequential pattern discovery, and text mining
- Solid communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences

Preferred Additional Experience:
- Apache Spark
- The Hadoop ecosystem
- Java
- HP Vertica
- TensorFlow, reinforcement learning
- Ensemble methods, deep learning, and other topics in the machine learning community
- Familiarity with GenAI and other LLM and DL methods

How you will lead:
- Perform hands-on data analysis and modeling with huge data sets.
- Apply data mining, NLP, and machine learning (both supervised and unsupervised) to improve relevance and personalization algorithms.
- Work side-by-side with product managers, software engineers, and designers in designing experiments and minimum viable products.
- Discover data sources, get access to them, import them, clean them up, and make them “model-ready”. You need to be willing and able to do your own ETL.
- Create and refine features from the underlying data. You’ll enjoy developing just enough subject matter expertise to have an intuition about what features might make your model perform better, and then you’ll lather, rinse, and repeat.
- Run regular A/B tests, gather data, perform statistical analysis, draw conclusions on the impact of your optimizations, and communicate results to peers and leaders.
- Explore new design or technology shifts in order to determine how they might connect with the customer benefits we wish to deliver.
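The A/B-testing duty above typically rests on a two-proportion z-test. A stdlib-only sketch; the conversion counts are invented, and a real analysis would normally reach for scipy or statsmodels:

```python
# Two-proportion z-test sketch for an A/B experiment: did variant B's
# conversion rate differ significantly from variant A's?
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z_statistic, two_sided_p_value) for H0: p_a == p_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal, via the complementary
    # error function: P(|Z| > |z|) = erfc(|z| / sqrt(2)).
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical experiment: 200/4000 conversions in A vs 260/4000 in B.
z, p = two_proportion_z(200, 4000, 260, 4000)
```

With these made-up numbers the lift is significant at the usual 0.05 level; the communication half of the job is then explaining what that does and does not imply to non-technical stakeholders.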
Posted 1 week ago
1.0 - 6.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Skillsoft is the global leader in eLearning, trusted by the world's leading organizations, including 65% of the Fortune 500. Our 100,000+ courses, videos, and books are accessed over 100 million times every month, across more than 100 countries. At Skillsoft, we believe knowledge is the fuel for innovation, and innovation is the fuel for business growth. Join us in our quest to democratize learning and help individuals unleash their edge.

This role is for an Application Engineer and Senior AI Software Developer who will support internal teams and clients in the scoping, design, development, and implementation of application integration solutions, while also focusing on AI product enhancements to optimize support delivery and accelerate support case deflection. This role combines the responsibilities of an Application Engineer and a Senior Software Developer, leveraging AI technologies to improve customer interactions and service efficiency.

Responsibilities:
- Apply knowledge and experience of AI/ML software engineering
- Work with product owners and curators to understand requirements and guide new features
- Collaborate to identify new feature impacts on existing services and teams
- Research, prototype, and select appropriate COTS and in-house technology and designs
- Collaborate with the team to design, develop, and occasionally enhance or maintain existing systems
- Document designs and implementation to ensure consistency and alignment with standards
- Create documentation including system and sequence diagrams
- Create appropriate data pipelines for AI/ML
- Utilize and apply generative AI for products and for daily productivity
- Periodically explore new technologies and design patterns with proofs-of-concept
- Occasionally present research and work to socialize and share knowledge across the organization
- Lead solution design initiatives to facilitate the delivery of Skillsoft’s services to client audiences
- Implement client application integration solutions
- Work closely with client account management teams to ensure customer satisfaction with integrated solutions
- Utilize expertise in Microsoft .NET/ASP, JSON, Java, PHP, SQL administration, MS IIS, Apache, Tomcat, and common Internet communication protocols

Environment, Tools & Technologies:
- Agile/Scrum
- Operating systems – Mac, Linux
- Python, JavaScript, Node.js
- React UI/UX
- LLMs (OpenAI GPT-X, Claude, embedding models)
- Vector indexing/databases, RAG, agents
- APIs – GraphQL, REST
- Docker, Kubernetes
- Amazon Web Services (AWS), MS Azure OpenAI
- SQL (Postgres RDS), NoSQL (Cassandra, Elasticsearch)
- Messaging – Kafka, RabbitMQ, SQS
- GitHub, IDE (your choice)
- Windows Active Directory
- Microsoft Office Suite applications
- Experience in web-based application development using common programming languages (Microsoft .NET/ASP, JSON, Java, PHP, etc.)

Skills & Qualifications:
- Post-secondary education in Information Technology or an equivalent combination of training and experience
- Minimum 6+ years of software engineering development experience, including 3 years in a Technical Support or Customer Service role for an e-business company, internet service provider, or software vendor
- Ability to design and document APIs, data models, and service interactions
- Familiarity or experience with: React development; JavaScript testing strategies (unit, integration, system); system and API security techniques; data privacy concerns; microservices architecture; vertical vs. horizontal scaling

Attributes for Success:
- Proactive, independent, adaptable, and collaborative team player
- Excellent analytical and troubleshooting skills
- Strong problem-solving and analytical skills; understanding of various data structures and algorithms
- Customer-service minded with an ownership mindset
- Innovative and problem-solving mindset; passionate, curious, and open to new ideas
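The vector-indexing/RAG line in the tech list reduces, at its core, to similarity search over embeddings. A stdlib-only sketch using bag-of-words vectors and cosine similarity; real systems use learned embeddings and a vector database, and the support-article corpus here is invented:

```python
# Toy retrieval step of a RAG pipeline: embed documents as word-count
# vectors and return the one most similar to a query by cosine similarity.
import math
from collections import Counter

def embed(text):
    """Bag-of-words stand-in for a learned embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the document to feed the LLM as grounding context."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = [
    "reset your password from the account settings page",
    "invoices are emailed at the start of each billing cycle",
    "contact support to change your subscription plan",
]
best = retrieve("how do I reset my password", docs)
```

The support-case-deflection goal above is exactly this loop at scale: retrieve the most relevant article, then let the generative model answer from it.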
Posted 1 week ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Summary: This position is responsible for designing highly complex modules, critical components, or a whole application/product in its entirety, with the vision to integrate it across multiple systems. This position works independently and is seen as a technical leader. The position is responsible for driving the design and development efforts related to architecture, scalability, availability, and performance in alignment with the product/application roadmap.

Job Description

Roles and Responsibilities

In this role, you will:
- Be responsible for providing technical leadership and defining, developing, and evolving software in a fast-paced and agile development environment using the latest software development methodologies and infrastructure
- Provide guidance to developers with planning, execution, and/or design architecture using agile methodologies such as Scrum
- Work with Product Line Leaders (PLLs) to understand product requirements and vision
- Drive increased efficiency across the teams, eliminating duplication and leveraging product and technology reuse
- Capture system-level requirements by brainstorming with the CTO, senior architects, data scientists, businesses, and product managers
- Lead impact assessments and decisions related to technology choices, design/architectural considerations, and implementation strategy
- Act as a subject matter expert in processes and methodologies, with the ability to adapt and improvise in various situations
- Navigate through ambiguity and prioritize conflicting asks
- Apply expert-level skills in design, architecture, and development, with the ability to take a deep dive into implementation aspects if the situation demands
- Lead the architecture and design efforts across the product/multiple product versions, acting as an expert in architecting custom solutions off the base product
- Apply expertise in core data structures and algorithms, implementing them in the language of choice when necessary
Education Qualification: Bachelor's degree in Computer Science or “STEM” majors (Science, Technology, Engineering, and Math) with 12+ years of experience.

Desired Characteristics – Technical Expertise:
- 12+ years' experience relevant to software development, validation, and architecting in the industry space
- Hands-on application software development in both monolithic and microservice architectures
- Basic knowledge of UI/UX tools and development processes
- Expertise in C# (.NET Core) for architecting robust and scalable applications
- Expertise in microservices architecture and containerization technologies such as Docker and Kubernetes; comfortable building microservices with distributed systems
- Deep understanding of data architecture, Data Mesh, etc.
- Strong knowledge of data platforms (e.g., Databricks, Redshift)
- Proficiency with multiple databases (RDBMS, NoSQL, TSDB, columnar databases)
- Experience with Apache Arrow/Apache Calcite
- Experience in database design and architecture
- Proficient in implementing and optimizing database interactions, ensuring efficient and scalable data processing
- Proficient with performance optimization, secure coding, multi-threading, and caching
- Proficient in design principles, design patterns, and debugging techniques
- Proficient with cluster deployments, load balancing, HA, and redundancy
- Proficient in the NUnit framework for unit testing
- Proficient in message queueing and event streaming platforms like Kafka and RabbitMQ
- Proficient with CI/CD tools
- Proficient with monitoring tools like Grafana and Prometheus
- Autonomous and able to work asynchronously (due to time zone differences)

Job Requirements:
- Facilitates and coaches software engineering team sessions on requirements estimation and alternative approaches to team sizing and estimation
- Leads a community of practice around estimation to share best practices among teams
- Knowledgeable about developments in various contexts, businesses, and industries
- Quantifies the effectiveness of design choices by gathering data. Drives accountability and adoption. Publishes guidance and documentation to promote adoption of designs.
- Proposes design solutions based on research and synthesis; creates general design principles that capture the vision and critical concerns for a program.
- Demonstrates mastery of the intricacies of interactions and dynamics in Agile teams.
- Demonstrates advanced understanding of Lean Six Sigma principles (e.g., Black Belt certified).
- Guides new teams to adopt Agile, troubleshoots adoption efforts, and guides continuous improvement. Provides training on Lean/Agile.
- Drives elimination of inefficiencies in the coding process. Teaches XP practices to others. Actively embraces new methods and practices that increase efficiency and effectiveness.

Business Acumen:
- Evaluates technology to drive features and roadmaps. Maps technology trends to internal vision. Differentiates buzzwords from value propositions.
- Embraces technology trends that drive excellence beyond traditional practices (e.g., test automation in lieu of traditional QA practices).
- Balances value propositions for competing stakeholders. Makes well-researched recommendations on buy vs. build decisions.
- Conveys the value proposition for the company by assessing financial risks and gains of decisions and return on investment (ROI).
- Manages the process of building and maintaining a successful alliance.
- Understands and successfully applies common analytical techniques, including ROI, SWOT, and gap analyses. Able to clearly articulate the business drivers relevant to a given initiative.

Leadership:
- Influences through others; builds direct and "behind the scenes" support for ideas. Pre-emptively sees downstream consequences and effectively tailors influencing strategy to support a positive outcome. Uses experts or other third parties to influence.
- Able to verbalize what is behind decisions and their downstream implications.
- Continuously reflects on successes and failures to improve performance and decision-making.
- Understands when change is needed. Participates in technical strategy planning.
- Proactively identifies and removes project obstacles or barriers on behalf of the team.
- Able to navigate accountability in a matrixed organization.
- Communicates and demonstrates a shared sense of purpose. Learns from failure.

Personal Attributes:
- Able to effectively direct and mentor others in critical thinking skills.
- Proactively engages with cross-functional teams to resolve issues and design solutions using critical thinking, analysis skills, and best practices.
- Finds important patterns in seemingly unrelated information.
- Influences and energizes others toward the common vision and goal. Maintains excitement for a process and drives toward new ways of meeting the goal even when odds and setbacks render one path impassable.
- Innovates and integrates new processes and/or technology to significantly add value to GE. Identifies how the cost of change weighs against the benefits and advises accordingly. Proactively learns new solutions and processes to address seemingly unanswerable problems.

Note: To comply with US immigration and other legal requirements, it is necessary to specify the minimum number of years' experience required for any role based within the USA. For roles outside of the USA, to ensure compliance with applicable legislation, the JDs should focus on the substantive level of experience required for the role, and a minimum number of years should NOT be used.

This Job Description is intended to provide a high-level guide to the role. However, it is not intended to amend or otherwise restrict/expand the duties required from each individual employee as set out in their respective employment contract and/or as otherwise agreed between an employee and their manager.

Additional Information: Relocation Assistance Provided: Yes
Posted 1 week ago
7.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Technical lead with a minimum of 7 years of design and development experience in microservices architecture.

- Extensive coding experience with CQRS, Axon Framework, Spring Boot, MongoDB, Spring Cloud, and Hibernate, with a passion for coding
- Familiar with the OpenShift platform and Kubernetes, with good hands-on experience with distributed microservices in the cloud
- Hands-on experience in performance tuning, debugging, and monitoring with the latest tool stack: Prometheus, Zipkin, Hystrix, Fluentd, Kibana, Grafana
- Well versed in CI/CD principles, and actively involved in solving and troubleshooting issues in a distributed services ecosystem
- Familiar with distributed services resiliency and monitoring in a production environment
- Ensures quality of architecture and design of systems across the organization
- Effectively researches and benchmarks technology against other best-in-class automated marketing technologies
- Able to influence multiple teams on technical considerations, increasing their productivity and effectiveness by sharing deep knowledge and experience
- Self-motivator and self-starter, able to own and drive things without supervision, and works collaboratively with teams across the organization
- Excellent soft skills and interpersonal skills to interact with, and present ideas to, senior and executive management
- Able to review code for the best outcomes, retrofit code, and set up strong coding practices across teams
- Orchestration using design patterns and tools, not limited to rules engines such as Apache Camel and Drools
- Understanding of containerization using Docker and its implementations
- Adherence to best practices such as unit testing, TDD, and continuous deployment
- Swagger API documentation
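CQRS, as called for above, separates the write (command) path from the read (query) path, usually joined by an event stream. A minimal stdlib-only sketch of the pattern; the order-tracking domain is invented, and in this stack Axon Framework and Spring Boot would supply the real command bus, event store, and projections:

```python
# Minimal CQRS sketch: commands mutate state through an event log;
# queries read from a separately maintained projection.
from dataclasses import dataclass

@dataclass
class PlaceOrder:            # command (write-side input)
    order_id: str
    amount: float

@dataclass
class OrderPlaced:           # event (what actually happened)
    order_id: str
    amount: float

class CommandHandler:
    """Write side: validates commands and appends events to the log."""
    def __init__(self, event_log):
        self.event_log = event_log

    def handle(self, cmd: PlaceOrder):
        if cmd.amount <= 0:
            raise ValueError("amount must be positive")
        self.event_log.append(OrderPlaced(cmd.order_id, cmd.amount))

class OrderProjection:
    """Read side: a query model rebuilt from the event stream."""
    def __init__(self):
        self.totals = {}

    def apply(self, event: OrderPlaced):
        self.totals[event.order_id] = self.totals.get(event.order_id, 0.0) + event.amount

    def total_for(self, order_id):
        return self.totals.get(order_id, 0.0)

log = []
handler = CommandHandler(log)
projection = OrderProjection()
handler.handle(PlaceOrder("A-1", 100.0))
handler.handle(PlaceOrder("A-1", 50.0))
for e in log:
    projection.apply(e)
```

Because reads never touch the write model, each side can be scaled and stored independently, which is the property that makes the pattern attractive for the distributed microservices described above.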
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with Hadoop ecosystem, Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
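The Spark/PySpark pipelines this role builds follow a map-shuffle-reduce shape. A stdlib-only sketch of the classic word count; a real job would use `flatMap` and `reduceByKey` on a PySpark RDD or DataFrame, and the input lines here are invented:

```python
# Pure-Python sketch of the map -> shuffle -> reduce flow behind a
# Spark word count. Each stage mirrors an RDD transformation.
from collections import defaultdict

lines = ["big data big pipelines", "data pipelines at scale"]

# map/flatMap: emit one (word, 1) pair per token
pairs = [(word, 1) for line in lines for word in line.split()]

# shuffle: group pairs by key (Spark does this across the cluster)
grouped = defaultdict(list)
for word, count in pairs:
    grouped[word].append(count)

# reduce: sum counts per key (reduceByKey(add) in Spark)
counts = {word: sum(vals) for word, vals in grouped.items()}
```

The value of Spark is that each of these stages runs partitioned across machines; the logic per stage is no more complicated than this.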
Posted 1 week ago
6.0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a Senior Developer with expertise in SnapLogic and Apache Airflow to design, develop, and maintain enterprise-level data integration solutions. This role requires strong technical expertise in ETL development, workflow orchestration, and cloud technologies. You will be responsible for automating data workflows, optimizing performance, and ensuring the reliability and scalability of our data systems.

Key Responsibilities:
- Design, develop, and manage ETL pipelines using SnapLogic, ensuring efficient data transformation and integration across various systems and applications.
- Leverage Apache Airflow for workflow automation, job scheduling, and task dependencies, ensuring optimized execution and monitoring.
- Work closely with cross-functional teams such as Data Engineering, DevOps, and Data Science to understand data requirements and deliver solutions.
- Collaborate in designing and implementing data pipeline architectures to support large-scale data processing in cloud environments like AWS, Azure, and GCP.
- Develop reusable SnapLogic pipelines and integrate with third-party applications and data sources, including databases, APIs, and cloud services.
- Optimize SnapLogic pipeline performance to handle large volumes of data with minimal latency.
- Provide guidance and mentoring to junior developers on the team, conducting code reviews and offering best-practice recommendations.
- Troubleshoot and resolve pipeline failures, ensuring high data quality and minimal downtime.
- Implement automated testing, continuous integration (CI), and continuous delivery (CD) practices for data pipelines.
- Stay current with new SnapLogic features, Airflow upgrades, and industry best practices.

Required Skills & Experience:
- 6+ years of hands-on experience in data engineering, focusing on SnapLogic and Apache Airflow.
- Strong experience with SnapLogic Designer and the SnapLogic cloud environment for building data integrations and ETL pipelines.
- Proficient in Apache Airflow for orchestrating, automating, and scheduling data workflows.
- Strong understanding of ETL concepts, data integration, and data transformations.
- Experience with cloud platforms like AWS, Azure, or Google Cloud, and data storage systems such as S3, Azure Blob, and Google Cloud Storage.
- Strong SQL skills and experience with relational databases like PostgreSQL, MySQL, and Oracle, as well as NoSQL databases.
- Experience working with REST APIs, integrating data from third-party services, and using connectors.
- Knowledge of data quality, monitoring, and logging tools for production pipelines.
- Experience with CI/CD pipelines and tools such as Jenkins, GitLab, or similar.
- Excellent problem-solving skills with the ability to diagnose issues and implement effective solutions.
- Ability to work in an Agile development environment.
- Strong communication and collaboration skills to work with both technical and non-technical teams.
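Airflow's job, as described above, is to run tasks in dependency order. The scheduling idea can be sketched with the stdlib's `graphlib`; the task names and dependency edges are invented, and a real deployment would declare an Airflow `DAG` with operators instead of plain functions:

```python
# Dependency-ordered task execution, the core idea behind an Airflow DAG.
# Each key depends on the tasks listed in its value set.
from graphlib import TopologicalSorter

deps = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
    "notify": {"load"},
}

def run_pipeline(dependencies):
    """Execute tasks in an order that respects every dependency edge."""
    executed = []
    for task in TopologicalSorter(dependencies).static_order():
        executed.append(task)  # in Airflow, an operator would run here
    return executed

order = run_pipeline(deps)
```

Airflow adds what this sketch omits: scheduling intervals, retries, backfills, and per-task monitoring, which is why the posting pairs it with SnapLogic rather than hand-rolled scripts.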
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
P1,C3,STS

- 6+ years of extensive hands-on application development work experience throughout the entire project lifecycle
- Development experience using Java 8 or higher, Angular 8, Spring, Spring Boot, RESTful web services, JMS/Kafka, and database basics
- Working knowledge of J2EE (Servlets/JSP/XML) and RESTful services
- Frameworks: Spring, Spring Boot, Swagger
- Middleware: JMS, Apache Tomcat, MQ, Kafka
- Oracle 10g/12c, PL/SQL
- OS: Unix commands and shell scripting
- Web technologies: HTML, CSS, AJAX, JavaScript, JSON
- Expertise in Eclipse, SVN, Git, Maven
- Scripting languages such as JavaScript; 3rd-party APIs and plugins
- JUnit, log4j, Jackson, FindBugs, Checkstyle, PMD
- DevOps tools: Jenkins, Maven, Sonar, Splunk

Skills:
- Java/J2EE: multithreading, collections, design patterns, lambdas, the Stream API, functional programming, Servlets/JSP/XML, RESTful services
- Spring: lifecycle, scope, DI, Spring Boot, Swagger
- Angular: decent understanding of Angular features, JavaScript, TypeScript, HTML5, CSS, NgRx
- Oracle SQL: SQL, joins, performance query tuning, explain plan, stored procedures, SQL*Loader, data modelling, normalization
- ORM (JPA/Hibernate)
- CI/CD pipeline: GitLab, Jenkins
- Development and knowledge base: Eclipse, IntelliJ, JIRA, Confluence
- Basic DB awareness
- JMS/Kafka, workflow engines (Camunda, etc.) – nice to have
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: ETL Developer – Informatica BDM/DEI
📍 Location: Onsite
🕒 Employment Type: Full Time
💼 Experience Level: Mid-Senior

Job Summary: We are seeking a skilled and results-driven ETL Developer with strong experience in Informatica BDM (Big Data Management) or Informatica DEI (Data Engineering Integration) to design and implement scalable, high-performance data integration solutions. The ideal candidate will work on large-scale data projects involving structured and unstructured data, and contribute to the development of reliable and efficient ETL pipelines across modern big data environments.

Key Responsibilities:
- Design, develop, and maintain ETL pipelines using Informatica BDM/DEI for batch and real-time data integration
- Integrate data from diverse sources including relational databases, flat files, cloud storage, and big data platforms such as Hive and Spark
- Translate business and technical requirements into mapping specifications and transformation logic
- Optimize mappings, workflows, and job executions to ensure high performance, scalability, and reliability
- Conduct unit testing and participate in integration and system testing
- Collaborate with data architects, analysts, and business stakeholders to understand requirements and deliver robust solutions
- Support data quality checks, exception handling, and metadata documentation
- Monitor, troubleshoot, and resolve ETL job issues and performance bottlenecks
- Ensure adherence to data governance and compliance standards throughout the development lifecycle

Key Skills and Qualifications:
- 5-8 years of experience in ETL development with a focus on Informatica BDM/DEI
- Strong knowledge of data integration techniques, transformation logic, and job orchestration
- Proficiency in SQL, with the ability to write and optimize complex queries
- Experience working with Hadoop ecosystems (e.g., Hive, HDFS, Spark) and large-volume data processing
- Understanding of performance optimization in ETL and big data environments
Familiarity with job scheduling tools and workflow orchestration (e.g., Control-M, Apache Airflow, Oozie)
Good understanding of data warehousing, data lakes, and data modeling principles
Experience working in Agile/Scrum environments
Excellent analytical, problem-solving, and communication skills
Good to have:
Experience with cloud data platforms (AWS Glue, Azure Data Factory, or GCP Dataflow)
Exposure to Informatica IDQ (Data Quality) is a plus
Knowledge of Python, Shell scripting, or automation tools
Informatica or Big Data certifications
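The batch-integration work this posting describes follows a common incremental-load pattern: pull only the rows changed since the last watermark and upsert them into the target. A minimal, tool-agnostic sketch in Python with SQLite (Informatica itself is configured graphically; the table and column names here are illustrative, not from any real system):

```python
import sqlite3

# Illustrative staging and target tables; an incremental load by watermark.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL, updated_at TEXT);
    CREATE TABLE tgt_orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT);
    INSERT INTO src_orders VALUES (1, 10.0, '2024-01-01'), (2, 20.0, '2024-01-02');
""")

def incremental_load(conn, watermark):
    """Upsert only rows changed since the last successful run."""
    rows = conn.execute(
        "SELECT id, amount, updated_at FROM src_orders WHERE updated_at > ?",
        (watermark,),
    ).fetchall()
    conn.executemany(
        "INSERT INTO tgt_orders VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET amount=excluded.amount, "
        "updated_at=excluded.updated_at",
        rows,
    )
    return len(rows)

loaded = incremental_load(conn, '2024-01-01')  # only the newer row qualifies
print(loaded)
```

The `ON CONFLICT` upsert needs SQLite 3.24+, which ships with modern Python; a production mapping would also persist the watermark after each successful run.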
Posted 1 week ago
7.0 - 12.0 years
15 - 30 Lacs
Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)
Work from Office
Role & responsibilities:
BI product development using the open-source framework Apache Superset. Check for experience with any BI solution (ideally Apache Superset itself) and for delivery of multiple BI projects.
The role needs advanced Python and advanced SQL knowledge. Check for experience using Python and SQL in product development or project delivery, and ask which use case required that depth.
Should be able to perform advanced analytics such as fraud analytics, prediction, and forecasting. Check for any such advanced analytics experience.
Client management experience for managing BI projects.
Preferred candidate profile
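The "prediction and forecasting" and "fraud analytics" work the screening notes describe can be illustrated with a toy stdlib-only sketch: a moving-average forecast plus a z-score outlier flag. The data, window, and threshold are invented for illustration:

```python
from statistics import mean, stdev

# Fabricated daily sales series; real inputs would come from a BI query.
daily_sales = [100, 102, 98, 105, 103, 101, 99]

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    return mean(series[-window:])

def flag_outliers(series, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from the mean."""
    mu, sigma = mean(series), stdev(series)
    return [x for x in series if abs(x - mu) > z_threshold * sigma]

forecast = moving_average_forecast(daily_sales)
print(round(forecast, 1))  # mean of 103, 101, 99 -> 101.0
```

A real fraud model would of course go well beyond z-scores, but this is the shape of the question an interviewer could probe with.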
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
💼 We’re Hiring – Performance Test Automation Engineer (JMeter)
📍 Remote / Hybrid | 🧑💻 Experience: 2–5 Years | 🚀 Immediate Joiners – Client Urgency!
🎯 Your Mission
🔸 Break systems before users do – automate load testing that scales
🔸 Own performance scripts and analysis across APIs and web apps
🛠️ Tech You’ll Use
🔧 Apache JMeter for scripting
⚙️ CI/CD pipelines – Jenkins / GitHub Actions
📊 Grafana, InfluxDB for monitoring
🌐 REST APIs & system-level performance testing
📁 Optional tools: BlazeMeter, Dynatrace, LoadRunner
✅ What We’re Looking For
• 2–5 years in Performance Testing + Automation
• Hands-on with JMeter scripting
• Experience testing APIs under load
• Knowledge of automation and test pipelines
🎁 Why Join Us?
🏠 Remote-first work
🕒 Flexible hours
💸 Competitive pay (Full-Time / Contract)
📈 Growth with modern, high-scale tech teams
📩 Apply Now! Important: also send your CV to: info@qatechxperts.com
✉️ Subject: Performance QA – JMeter – Immediate Joiner
💥 Break barriers. Build confidence. Test for performance.
#Hiring #PerformanceTesting #QAJobs #JMeter #RemoteJobs #AutomationEngineer #LoadTesting #ImmediateJoiners #QAWithImpact
Posted 1 week ago
6.0 - 11.0 years
22 - 27 Lacs
Pune, Bengaluru
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers on platforms such as Azure, Salesforce, and AWS. Build out data lineage artifacts to ensure all current and future systems are properly documented.
Required candidate profile: Strong proficiency in SQL query development. Experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.
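The "quality checks" on loaded healthcare data that this role calls for can be sketched as a small, illustrative batch validator; the rules, rows, and the PHI column name are invented, and in a dbt project these would typically live as schema tests instead:

```python
# Fabricated batch of loaded records; a real batch would come from the warehouse.
records = [
    {"patient_id": "p1", "ssn": None, "visits": 3},
    {"patient_id": "p2", "ssn": None, "visits": 0},
]

def run_quality_checks(rows, required=("patient_id",), phi_columns=("ssn",)):
    """Return a dict of check name -> pass/fail for a loaded batch."""
    return {
        "non_empty": len(rows) > 0,
        "required_not_null": all(r[c] is not None for r in rows for c in required),
        # PHI columns should arrive masked/nulled before reaching analysts
        "phi_masked": all(r[c] is None for r in rows for c in phi_columns),
    }

results = run_quality_checks(records)
print(results)
```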
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us
At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.
We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us!
Available Locations: Bengaluru
About The Department
The Growth Engineering team is responsible for building world-class experiences that help the millions of Cloudflare self-service customers get what they need faster, from acquisition and onboarding all the way through to adoption and scale up. Our team is focused on high velocity experimentation and thoughtful optimizations to that experience on Cloudflare’s properties. This team has a dual mandate, also focusing on evolving our current marketing attribution, customer event ingress and experimentation capabilities that process billions of events across those properties to drive data-driven decision making.
As an engineer for the team responsible for Data Capture and Experimentation, your job will be to deliver on those growth-driven features and experiences while evolving our current marketing attribution, consumer event ingress, and experimentation setup across these experiences, and to partner with many teams on implementations.
About The Role
We are looking for experienced full-stack engineers to join the Experimentation and Data Capture team. The ideal candidate will have experience working with large-scale applications, familiarity with event-driven data capture, and a strong understanding of system design. You must care deeply not only about the quality of your and the team's code, but also the customer experience and developer experience. We have a great opportunity to evolve our current data capture and experimentation systems to better serve our customers. We are also strong believers in dog-fooding our own products. From cache configuration to Cloudflare Access, Cloudflare Workers, and Zaraz, these are all tools in our engineers' tool belt, so it is a plus if you have been a customer of ours, even as a free user.
What You’ll Do
The Experimentation and Data Capture Engineering Team will be responsible for the following:
Technical delivery for Experimentation and Data Capture capabilities intended for all of our customer-facing UI properties, driving user acquisition, engagement, and retention through data-driven strategies and technical implementations
Collaborate with product, design, and stakeholders to establish outcome measurements, roadmaps, and key deliverables
Own and lead execution of engineering projects in the area of web data acquisition and experimentation
Work across the entire product lifecycle from conceptualization through production
Build features end-to-end: front-end, back-end, IaC, system design, debugging and testing, engaging with feature teams and data processing teams
Inspire and mentor less experienced engineers
Work closely with the trust and safety team to handle any compliance or data privacy-related matters
Examples Of Desirable Skills, Knowledge And Experience
Comfort with building reusable SDKs and UI components with TypeScript/JavaScript required; comfort/familiarity with other languages (Go/Rust/Python) a plus
Experience building with high-scale serverless systems like Cloudflare Workers, AWS Lambda, Azure Functions, etc.
Design and execute A/B tests and experiments to optimize for business KPIs, including user onboarding, feature adoption, and overall product experience
Create reusable components for other developers to leverage
Experience publishing to and querying from data lake/warehouse products like ClickHouse and Apache Iceberg to evaluate experiments
Familiarity with commercial analytics systems (Adobe Analytics, Google BigQuery, etc.) a plus
Implement tracking and attribution systems to understand user behavior and measure the effectiveness of growth initiatives
Familiarity with event-driven architectures, high-scale data processing, issues that can occur, and how to protect against them
Familiarity with global data privacy requirements governed by laws like GDPR/CCPA, and the implications for data capture, modeling, and analysis
Desire to work in a very fast-paced environment
What Makes Cloudflare Special?
We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.
Project Galileo: Since 2014, we've equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.
Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we've provided services to more than 425 local government election websites in 33 states.
1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. This is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we don’t store client IP addresses, never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.
Sound like something you’d like to be a part of? We’d love to hear from you!
This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S.
export laws without sponsorship for an export license. Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Consultant - Cloud Data Engineer
Introduction to role
Are you ready to disrupt an industry and change lives? Join us at a crucial stage of our journey in becoming a digital and data-led enterprise. As a Senior Consultant - Cloud Data Engineer, you will have the opportunity to lead and innovate, transforming our ability to develop life-changing medicines. Your work will directly impact patients, empowering the business to perform at its peak by combining groundbreaking science with leading digital technology platforms and data.
Accountabilities
Lead the design, development, and maintenance of reliable, scalable data pipelines and ETL processes using tools such as SnapLogic, Snowflake, DBT, Fivetran, Informatica, and Python.
Work closely with data scientists to understand model requirements and prepare the right data pipelines for training and deploying machine learning models.
Collaborate with data scientists, analysts, and business teams to understand and optimize data requirements and workflows.
Apply Power BI, Spotfire, Domo, and Qlik Sense to create actionable data visualizations and reports that drive business decisions.
Implement standard methodologies for version control and automation using GitHub Actions, Liquibase, Flyway, and CI/CD tools.
Optimize data storage, processing, and integration, leveraging AWS data engineering tools (e.g., AWS Glue, Amazon Redshift, Amazon S3, Amazon Kinesis, AWS Lambda, Amazon EMR).
Troubleshoot, debug, and resolve issues related to existing data pipelines and architectures.
Ensure data security, privacy, and compliance with industry regulations and organizational policies.
Provide mentorship to junior engineers, offering guidance on best practices and supporting technical growth within the team.
Essential Skills/Experience
SnapLogic: Expertise in SnapLogic for building, managing, and optimizing both batch and real-time data pipelines.
Proficiency in using SnapLogic Designer for designing, testing, and deploying data workflows. In-depth experience with SnapLogic Snaps (e.g., REST, SOAP, SQL, AWS S3) and Ultra Pipelines for real-time data streaming and API management.
AWS: Strong experience with AWS data engineering tools, including AWS Glue, Amazon Redshift, Amazon S3, AWS Lambda, Amazon Kinesis, AWS DMS, and Amazon EMR. Expertise in cloud data architectures, data migration strategies, and real-time data processing on AWS platforms.
Snowflake: Extensive experience in Snowflake cloud data warehousing, including data modeling, query optimization, and managing ETL pipelines using DBT and Snowflake-native tools.
Fivetran: Proficient in Fivetran for automating data integration from various sources to cloud-based data warehouses, optimizing connectors for data replication and transformation.
Real-Time Messaging and Stream Processing: Experience with real-time data processing frameworks (e.g., Apache Kafka, Amazon Kinesis, RabbitMQ, Apache Pulsar).
Desirable Skills/Experience
Exposure to other cloud platforms such as Azure or Google Cloud Platform (GCP).
Familiarity with data governance, data warehousing, and data lake architectures.
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.
At AstraZeneca, we combine technology skills with a scientific mindset to make a meaningful impact. Our dynamic environment offers countless opportunities to learn and grow while working on cutting-edge technologies. We are committed to driving cross-company change to disrupt the entire industry.
Ready to take on this exciting challenge? Apply now!
Date Posted: 16-Jul-2025
Closing Date: 30-Jul-2025
AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 1 week ago
7.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Python AWS Engineer
GCL: D1
Introduction to role
This is an outstanding opportunity for a senior engineer to advance modern software development practices within our team (DevOps/CI/CD/automated testing), building a bespoke integrated software framework (on-premise/cloud/COTS) which will accelerate the ability of AZ scientists to develop new drug candidates for unmet patient needs. To achieve this goal, we need a strong senior individual to work with teams of engineers, as well as engage and influence other global teams within Solution Delivery to ensure that our priorities are aligned with the needs of our science. The successful candidate will be a hands-on coder, passionate about software development and also willing to coach and enable wider teams to grow and expand their software delivery capabilities and skills.
Accountabilities
The role will encompass a variety of approaches with the aim of simplifying and streamlining scientific workflows, data, and applications, while advancing the use of AI and automation for use by scientists. Working alongside the platform lead, architect, BA, and informaticians, you will work to understand requirements, devise technical solutions, and estimate, deliver, and run operationally sustainable platform software. You need to use your technical acumen to determine an optimal balance between COTS and home-grown solutions and own their lifecycles and roadmap. Our delivery teams are distributed across multiple locations, and as Senior Engineer you will need to coordinate the activities of technical internal and contract employees. You must be capable of working with others, driving ownership of solutions, and showing humility while striving to enable the development of platform technical team members in our journey. You will raise expectations within the whole team, solve complex technical problems, and work alongside complementary delivery platforms while aligning solutions with scientific and data strategies and target architecture.
Essential Skills/Experience
7-10 years of experience working with Python.
Proven experience with Python for data manipulation and analysis.
Strong proficiency in SQL and experience with relational databases.
In-depth knowledge and hands-on experience with various AWS services (S3, Glue, VPC, Lambda, Batch, Step Functions, ECS).
Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK and CloudFormation.
Experience with Snowflake or other data warehousing solutions.
Knowledge of CI/CD processes and tools, specifically Jenkins and Docker.
Experience with big data technologies such as Apache Spark or Hadoop is a plus.
Strong analytical and problem-solving skills, with the ability to work independently and as part of a team.
Excellent communication skills and ability to collaborate with cross-functional teams.
Familiarity with data governance and compliance standards.
Experience with process tools like JIRA and Confluence.
Experience building unit tests, integration tests, system tests, and acceptance tests.
Good team player, with the attitude to work with the highest integrity.
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.
At AstraZeneca, our work has a direct impact on patients by transforming our ability to develop life-changing medicines. We empower the business to perform at its peak by combining cutting-edge science with leading digital technology platforms and data. Join us at a crucial stage of our journey in becoming a digital and data-led enterprise.
Here you can innovate, take ownership, explore new solutions, experiment with leading-edge technology, and tackle challenges in a modern technology environment. Ready to make an impact? Apply now!
Date Posted: 14-Jul-2025
Closing Date:
AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
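The Lambda-plus-S3 work in the skills list above can be sketched without the AWS SDK: a handler that reacts to an S3 "ObjectCreated" notification and collects the object keys to process. The event shape follows S3's notification format, but the bucket and key values are invented:

```python
# Hypothetical AWS Lambda-style handler; pure Python, no boto3 required.
def handler(event, context=None):
    """Collect object keys from an S3 event notification payload."""
    keys = [
        rec["s3"]["object"]["key"]
        for rec in event.get("Records", [])
        if rec.get("eventName", "").startswith("ObjectCreated")
    ]
    return {"processed": keys, "count": len(keys)}

# Fabricated sample event mimicking an S3 notification with two records.
sample_event = {
    "Records": [
        {"eventName": "ObjectCreated:Put",
         "s3": {"bucket": {"name": "demo-bucket"}, "object": {"key": "in/data.csv"}}},
        {"eventName": "ObjectRemoved:Delete",
         "s3": {"bucket": {"name": "demo-bucket"}, "object": {"key": "in/old.csv"}}},
    ]
}
result = handler(sample_event)  # only the ObjectCreated record is kept
print(result)
```

Keeping the handler free of SDK calls like this also makes it easy to unit test, which matches the posting's emphasis on unit/integration/acceptance testing.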
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Bioinformatician
GCL: D2
Introduction to role
Are you ready to tackle some of the most challenging informatics problems in the drug discovery clinical trial phase? Join us as a Senior Bioinformatician and be part of a team that is redefining healthcare. Your work will directly impact millions of patients by advancing the standard of drug discovery through data processing, analysis, and algorithm development. Collaborate with informaticians, data scientists, and engineers to deliver groundbreaking solutions that drive scientific insights and improve the quality of candidate drugs. Are you up for the challenge?
Accountabilities
Collaborate with scientific colleagues across AstraZeneca to ensure informatics and advanced analytics solutions meet R&D needs.
Develop and deliver informatics solutions using agile methodologies, including pipelining approaches and algorithm development.
Contribute to multi-omics drug projects with downstream analysis and data analytics.
Create, benchmark, and deploy scalable data workflows for genome assembly, variant calling, annotation, and more.
Implement CI/CD practices for pipeline development across cloud-based and HPC environments.
Apply cloud computing platforms like AWS for pipeline execution and data storage.
Explore opportunities to apply AI & ML in informatics.
Engage with external peers and software providers to apply the latest methods to business problems.
Work closely with data scientists and platform teams to deliver scientific insights.
Collaborate with informatics colleagues in our Global Innovation and Technology Centre.
Essential Skills/Experience
Masters/PhD (or equivalent) in Bioinformatics, Computational Biology, AI/ML, Genomics, Systems Biology, Biomedical Informatics, or a related field, with a demonstrable record of informatics and image analysis delivery in a biopharmaceutical setting.
Strong coding and software engineering skills, such as Python, R, scripting, and Nextflow.
Over 6 years of experience in image analysis/bioinformatics, with a focus on image/NGS data analysis and Nextflow (DSL2) pipeline development.
Proficiency in cloud platforms, preferably AWS (e.g., S3, EC2, Batch, EBS, EFS), and containerization tools (Docker, Singularity).
Experience with workflow management tools and CI/CD practices in image analysis and bioinformatics (Git, GitHub, GitLab), and HPC on AWS.
Experience with multi-omics analysis (transcriptomics, single cell, CRISPR, etc.) or image data (DICOM, WSI, etc.) analysis.
Experience working with omics tools and databases such as NCBI, PubMed, the UCSC Genome Browser, bedtools, samtools, and Picard, or imaging-relevant tools such as CellProfiler, HALO, and VisioPharm, particularly in digital pathology and biomarker research.
Strong communication skills, with the ability to collaborate effectively with team members and partners to achieve objectives.
Desirable Skills/Experience
Experience in omics or imaging data analysis in a biopharmaceutical setting.
Knowledge of Docker and Kubernetes for container orchestration.
Experience with other workflow management systems (e.g., Apache Airflow, Nextflow, Cromwell, AWS Step Functions).
Familiarity with web-based bioinformatics tools (e.g., RShiny, Jupyter).
Experience working in GxP-validated environments.
Experience administering and optimising an HPC job scheduler (e.g., SLURM).
Experience with configuration automation and infrastructure as code (e.g., Ansible, HashiCorp Terraform, AWS CloudFormation, AWS Cloud Development Kit).
Experience deploying infrastructure and code to public cloud, especially AWS.
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible.
We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.
At AstraZeneca, we are driven by a shared purpose to push the boundaries of science and develop life-changing medicines. Our innovative approach combines groundbreaking science with leading digital technology platforms to empower our teams to perform at their best. We foster an environment where you can explore new solutions and experiment with groundbreaking technology. With countless opportunities for learning and growth, you'll be part of a diverse team that works multi-functionally to make a meaningful impact on patients' lives. Ready to make a difference? Apply now to join our team as a Senior Bioinformatician!
Date Posted: 02-Jul-2025
Closing Date: 30-Jul-2025
AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
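A single step of the sequence-analysis pipelines this role builds can be illustrated with a toy stdlib example; the sequence is invented and the function is a deliberate simplification of what tools like samtools or a Nextflow process would do at scale:

```python
# Toy per-sequence metric of the kind an NGS QC pipeline step computes.
def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence (ignores ambiguous bases)."""
    seq = seq.upper()
    gc = sum(seq.count(b) for b in "GC")
    return gc / len(seq) if seq else 0.0

print(round(gc_content("ATGCGC"), 2))  # 4 of 6 bases are G or C -> 0.67
```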
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Come work at a place where innovation and teamwork come together to support the most exciting missions in the world!
Description:
We are seeking a talented Lead Software Engineer – Performance to deliver roadmap features of the Enterprise TruRisk Platform, which helps customers measure, communicate, and eliminate cyber risks. You will lead the performance engineering efforts across Spark, Kafka, Elasticsearch, and middleware APIs, ensuring that our real-time data pipelines and services meet enterprise-grade SLAs. As part of our high-performing engineering team, you will design and execute performance testing strategies, identify system bottlenecks, and work with development teams to implement performance improvements that support processing billions of cyber security events a day across our data platform.
Responsibilities:
Own the performance strategy across distributed systems, including Hadoop, Spark, Kafka, Elasticsearch/OpenSearch, big data components, and APIs, for each release.
Define, develop, and execute performance test plans, load tests, stress tests, and soak tests.
Create realistic performance test scenarios for data pipelines and microservices based on production-like workloads.
Proactively identify bottlenecks, resource contention, and latency issues using tools such as JMeter, Spark UI, Kafka Manager, Elastic monitoring, and AppDynamics.
Provide deep-dive analysis and recommendations on tuning and scaling Spark jobs, Kafka topics/partitions, ES queries, and API endpoints.
Collaborate with developers, architects, and infrastructure teams to integrate performance feedback into design and implementation.
Simulate and benchmark real-time and batch data flow at scale using synthetic and production-like datasets, and own the synthetic data generator framework end to end.
Lead the initiative to build a performance testing framework that integrates with CI/CD pipelines.
Establish and track SLAs for throughput, latency, CPU/memory utilization, and garbage collection.
Create performance dashboards and visualizations using Prometheus/Grafana, Kibana, or equivalent.
Document performance test findings and create technical reports for leadership and engineering teams.
Recommend performance optimizations to Dev and Platform groups.
Responsible for optimizing overall cost.
Contribute to feature development and fixes apart from performance benchmarking.
Qualifications:
Bachelor's degree in computer science, engineering, or a related field.
8+ years of overall experience in distributed systems and backend performance engineering.
4+ years of Java development experience with microservices architecture.
Proficient in scripting (Python, Bash) for automation and test data generation.
4+ years of hands-on experience with Apache Spark: performance tuning, memory management, and DAG optimization.
3+ years of experience with Kafka: topic optimization, producer/consumer tuning, and lag monitoring.
3+ years of experience with Elasticsearch/OpenSearch: query profiling, indexing strategies, and cluster optimization.
3+ years of experience with performance testing tools such as JMeter or similar.
Excellent programming and design skills, with hands-on experience in Spring and Hibernate.
Deep understanding of middleware and microservices performance, including REST APIs.
Strong knowledge of profiling, debugging, and observability tools (e.g., Spark UI, Athena, Grafana, ELK).
Experience designing and running benchmarks at scale for high-throughput environments in the PB range.
Experience with containerized workloads and performance testing in Kubernetes/Docker environments.
Solid understanding of cloud-native architecture (OCI) and distributed systems design.
Strong knowledge of Linux operating systems and performance-related improvements.
Familiarity with CI/CD integration for performance testing (e.g., Jenkins, GitHub).
Knowledge of data lake architecture, caching solutions, and message queues.
Strong communication skills and experience influencing cross-functional engineering teams.
Additional Plus Competencies:
Prior experience in any analytics platform on big data would be a huge plus.
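The "lag monitoring" called out in the Kafka requirement above reduces to simple arithmetic: a consumer's lag on a partition is the log-end offset minus the group's committed offset. A minimal sketch with invented offset numbers (in practice they come from the Kafka admin API or consumer-group tooling):

```python
def consumer_lag(end_offsets, committed_offsets):
    """Return lag per partition and the total lag across partitions."""
    lag = {p: end_offsets[p] - committed_offsets.get(p, 0) for p in end_offsets}
    return lag, sum(lag.values())

# Fabricated offsets keyed by partition number.
end = {0: 1500, 1: 980, 2: 2100}        # latest (log-end) offset per partition
committed = {0: 1450, 1: 980, 2: 2000}  # consumer group's committed offsets
per_partition, total = consumer_lag(end, committed)
print(per_partition, total)  # partition 0 lags 50, partition 2 lags 100
```

Alerting on a sustained rise in `total` is a common first signal that producers are outpacing consumers and partition or consumer tuning is needed.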
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Invenio
The largest independent global SAP solutions provider serving the public sector, as well as offering specialist skills in media and entertainment. We bring deep expertise combined with advanced technologies to enable organizations to modernize so they can run at the speed of today’s business. We know how to navigate the extraordinary complexities of international businesses and public sector organizations, working with stakeholders to drive change and create agile organizations of tomorrow using the technologies of today. Learn more at www.invenio-solutions.com.
Role: Angular Senior Consultant
Skills:
4 to 6 years of experience in Angular (v8.0 and above), including TypeScript.
Hands-on experience with REST APIs.
Hands-on experience with Material Design.
Hands-on experience with Bootstrap.
Experience/knowledge of JSON.
Should be able to convert templates to screens.
Must have knowledge of code versioning using tools like TeamCity and GitHub.
Should have JUnit/MUnit test case development experience.
Should have working experience with a ticketing tool like JIRA/ServiceNow.
Should have knowledge of the Apache Tomcat server.
Basic Java skills, CSS, HTML.
Experience or knowledge of a database like MySQL, MS SQL, Oracle, etc.
Knowledge of Agile Scrum and waterfall methodologies.
Responsibilities:
Participate in client workshops, UI design, coding, unit testing, configuration, testing, and integration.
Technical design documentation and deployment; QA/SIT/UAT support.
Work on change request development.
Work on production fixes.
Collaborate with a distributed team.
Quality/standards focus.
Work towards the goal of "Becoming One of the Best Consulting Companies".
Focused on specific engagements.
Business Skills
Excellent oral and written communication skills, with the ability to clearly and concisely communicate with others.
Experience with the Microsoft Office suite, including Word, Excel, PowerPoint, and Visio.
Understanding of business processes for focus areas or multiple modules.
Ability to do research and perform detailed tasks. Strong analytical skills. Consulting Skills Aptitude for working in a team environment; problem-solving skills, creative thinking, communicating clearly and empathetically, strong time management, and ability to collaborate with all levels of staff. Ability to explain ideas and concepts to other project team members and to client personnel. Has a solid foundation for consulting “soft” skills necessary for client engagements and may act as a coach for others related to these soft skills. Ability to interpret requirements and apply SAP best practices. Ability to identify upsell opportunities and assist in the management of scope. Creates professional relationships with client Develop new professional peer relationships for additional business or possible new consultants Helps develop overall marketing messages Communicates project resource requirements to staffing coordinator/clients Ensures quality implementation (works with QA program) May participate in Pre-Sales as part of the client pursuit team Leadership Skills Seeks ways to increase the project team effectiveness Acts as a mentor to Consultants and Sr. 
Consultants Works well as a member of a team Seeks ways to increase their level of contribution and therefore team effectiveness Personnel Development Development of consultants to meet your project’s requirements Maintains knowledge of focus area at an expert level (known as the consultant’s consultant) Give effective feedback (Immediate and Evaluations) General Skills/Tasks Evaluate and design application and/or technical architectures Leads team effort in developing solutions for projects Completes assignments within budget, meets project deadlines, makes and keeps sensible commitments to client and team Meets billing efficiency targets, and complies with all administrative responsibilities in a timely and effective manner Keeps project management appraised of project direction and client concerns Understands the client’s business and technical environments Regularly prepares status reports Effectively manage a single engagement on a detailed level Define project scope Direct team efforts in developing solutions for mission-critical client needs Manage the team responsible for the daily activities of assigned projects Ensure project quality, satisfaction, and profitability Perform personnel performance evaluations Provide personnel performance, development, and education plans Refer to the Performance Plan and Job Description documents for additional responsibilities of this position Invenio is an Equal Opportunity Employer that does not discriminate on the basis of actual or perceived race, color, creed, religion, national origin, ancestry, citizenship status, age, sex or gender (including pregnancy, childbirth and pregnancy-related conditions), gender identity or expression (including transgender status), sexual orientation, marital status, military service and veteran status, disability, genetic information, or any other characteristic protected by applicable federal, state or local laws and ordinances. 
Invenio’s management team is dedicated to this policy with respect to recruitment, hiring, placement, promotion, transfer, training, compensation, benefits, employee activities, access to facilities and programs and general treatment during employment.
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Java Developer – 4 Years Experience
Location: Gurugram (Onsite)
Job Type: Full-Time
Experience Required: 4+ Years
Project Type: Migration & Enterprise-Level Projects

Job Summary:
We are seeking a skilled Java Developer with 4+ years of hands-on experience in enterprise-level applications and migration projects. The ideal candidate should be strong in Kafka, multi-threading, microservices, SQL, and core Java coding principles. You will work in a fast-paced Agile environment focused on designing scalable systems and supporting complex business processes.

Key Responsibilities:
Develop and maintain scalable Java-based microservices.
Build and integrate robust Kafka-based messaging solutions.
Write clean, efficient, and testable multi-threaded code for high-performance applications.
Collaborate with cross-functional teams to support migration initiatives.
Optimize SQL queries and interact with relational databases effectively.
Participate in code reviews, technical discussions, and performance tuning.
Deliver high-quality code aligned with enterprise standards and best practices.

Mandatory Skills:
Strong Core Java (OOP, Collections, exception handling)
Apache Kafka (producers, consumers, Streams, topics)
Multi-threading and concurrency
Microservices architecture (Spring Boot preferred)
SQL (joins, indexing, stored procedures, performance tuning)
Strong debugging and problem-solving skills

Good to Have:
Experience with Spring Cloud, Docker, or Kubernetes
Familiarity with CI/CD pipelines
Exposure to cloud platforms (AWS/Azure/GCP)
Knowledge of JIRA, Git, and Agile methodologies
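The two core themes of this posting, Kafka-based messaging and multi-threaded code, both come down to the producer/consumer pattern. As a rough, language-neutral sketch (written here in Python for brevity, with an in-memory queue standing in for a Kafka topic; all names are invented for the example):

```python
import queue
import threading

def run_pipeline(messages):
    """Toy producer/consumer pipeline: one producer thread publishes
    messages onto an in-memory queue, one consumer thread drains and
    processes them. A real system would use a Kafka client instead."""
    topic = queue.Queue()       # stands in for a Kafka topic
    results = []
    SENTINEL = object()         # signals end-of-stream to the consumer

    def producer():
        for msg in messages:
            topic.put(msg)
        topic.put(SENTINEL)

    def consumer():
        while True:
            msg = topic.get()
            if msg is SENTINEL:
                break
            results.append(msg.upper())   # the "processing" step

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run_pipeline(["order-created", "order-paid"]))
# ['ORDER-CREATED', 'ORDER-PAID']
```

The thread-safe queue plus a sentinel value is the standard shutdown idiom; with a single consumer, ordering is preserved just as within a Kafka partition.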
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com.

Job Description
We are seeking a seasoned Data Engineering Manager with 8+ years of experience to lead and grow our data engineering capabilities. This role demands strong hands-on expertise in Python, SQL, and Spark, and advanced proficiency in AWS and Databricks. As a technical leader, you will be responsible for architecting and optimizing scalable data solutions that enable analytics, data science, and business intelligence across the organization.

Key Responsibilities
Lead the design, development, and optimization of scalable and secure data pipelines using AWS services such as Glue, S3, Lambda, and EMR, and Databricks Notebooks, Jobs, and Workflows.
Oversee the development and maintenance of data lakes on AWS Databricks, ensuring performance and scalability.
Build and manage robust ETL/ELT workflows using Python and SQL, handling both structured and semi-structured data.
Implement distributed data processing solutions using Apache Spark/PySpark for large-scale data transformation.
Collaborate with cross-functional teams including data scientists, analysts, and product managers to ensure data is accurate, accessible, and well-structured.
Enforce best practices for data quality, governance, security, and compliance across the entire data ecosystem.
Monitor system performance, troubleshoot issues, and drive continuous improvements in data infrastructure.
Conduct code reviews, define coding standards, and promote engineering excellence across the team.
Mentor and guide junior data engineers, fostering a culture of technical growth and innovation.

Qualifications
8+ years of experience in data engineering with proven leadership in managing data projects and teams.
Expertise in Python, SQL, and Spark (PySpark), and experience with AWS and Databricks in production environments.
Strong understanding of modern data architecture, distributed systems, and cloud-native solutions.
Excellent problem-solving, communication, and collaboration skills.
Prior experience mentoring team members and contributing to strategic technical decisions is highly desirable.
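The "ETL/ELT workflows using Python and SQL, handling both structured and semi-structured data" responsibility above can be sketched in miniature. This is plain Python rather than PySpark so it is self-contained; the field names and quality rules are purely illustrative:

```python
import json

def transform(raw_records):
    """Minimal ETL-style transform: parse semi-structured JSON records,
    skip malformed rows, enforce a basic quality rule, and normalise the
    fields a downstream table would need."""
    rows = []
    for raw in raw_records:
        try:
            rec = json.loads(raw)
        except json.JSONDecodeError:
            continue                    # quarantine/skip unparseable records
        if "user_id" not in rec:
            continue                    # data-quality rule: key field required
        rows.append({
            "user_id": int(rec["user_id"]),                 # type coercion
            "country": rec.get("country", "unknown").lower()  # default + normalise
        })
    return rows

raw = ['{"user_id": "7", "country": "IN"}', 'not-json', '{"country": "US"}']
print(transform(raw))
# [{'user_id': 7, 'country': 'in'}]
```

In a Spark job the same parse/filter/normalise steps would typically become DataFrame operations distributed across executors, but the shape of the logic is the same.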
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Overview
TekWissen is a global workforce management provider that offers strategic talent solutions to our clients throughout India and worldwide. Our client is a company operating a marketplace for consumers, sellers, and content creators. It offers merchandise and content purchased for resale from vendors, as well as items offered by third-party sellers.

Job Title: Business Intelligence Engineer III
Location: Pune
Duration: 6 Months
Job Type: Contract
Work Type: Onsite

Job Description
The Top Responsibilities:
Data Engineering on AWS: Design and implement scalable and secure data pipelines using AWS services such as the client's S3, AWS Glue, the client's Redshift, and the client's Athena. Ensure high-performance, reliable, and fault-tolerant data architectures.
Data Modeling and Transformation: Develop and optimize dimensional data models to support various business intelligence and analytics use cases. Perform complex data transformations and enrichment using tools like AWS Glue, AWS Lambda, and Apache Spark.
Business Intelligence and Reporting: Collaborate with stakeholders to understand reporting and analytics requirements. Build interactive dashboards and reports using visualization tools like the client's QuickSight.
Data Governance and Quality: Implement data quality checks and monitoring processes to ensure the integrity and reliability of data. Define and enforce data policies, standards, and procedures.
Cloud Infrastructure Management: Manage and maintain the AWS infrastructure required for the data and analytics platform. Optimize performance, cost, and security of the underlying cloud resources.
Collaboration and Knowledge Sharing: Work closely with cross-functional teams, including data analysts, data scientists, and business users, to identify opportunities for data-driven insights. Share knowledge, best practices, and train other team members.

Leadership Principles
Ownership; Deliver Results; Insist on the Highest Standards.

Mandatory Requirements
3+ years of experience as a Business Intelligence Engineer or Data Engineer, with a strong focus on AWS cloud technologies.
Proficient in designing and implementing data pipelines using AWS services such as S3, Glue, Redshift, Athena, and Lambda.
Expertise in data modeling, dimensional modeling, and data transformation techniques.
Experience in building and deploying business intelligence solutions, including the use of tools like the client's QuickSight and Tableau.
Strong SQL and Python programming skills for data processing and analysis.
Understanding of cloud architecture patterns, security best practices, and cost optimization on AWS.
Excellent communication and collaboration skills to work effectively with cross-functional teams.

Preferred Skills
Hands-on experience with Apache Spark, Airflow, or other big data technologies.
Knowledge of AWS DevOps practices and tools, such as AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation.
Familiarity with agile software development methodologies.
AWS certification (e.g., AWS Certified Data Analytics – Specialty).

Certification Requirements
Any graduate.

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
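The "dimensional modeling" and "Strong SQL" requirements above meet in the star-schema query: a fact table joined to a dimension table and aggregated. A tiny self-contained sketch using Python's built-in sqlite3 (the table and column names are invented for the example; a real warehouse would be Redshift or similar):

```python
import sqlite3

# Minimal star schema: one dimension table, one fact table, plus the
# index a BI engineer would add to speed up the join key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales  VALUES (1, 10.0), (1, 5.0), (2, 7.5);
    CREATE INDEX ix_sales_product ON fact_sales(product_id);
""")

# The canonical BI query shape: join fact to dimension, group, aggregate.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()

print(rows)
# [('books', 15.0), ('games', 7.5)]
```

Dashboards in tools like QuickSight or Tableau are ultimately built on aggregations of exactly this form, which is why join-key indexing and dimensional design dominate their performance.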
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Purpose
Design and develop end-to-end software solutions that power innovative products at Trimble. Leverage your expertise in C#, ASP.NET (Framework/Core), Web API, Angular, and Microsoft Azure services to build scalable and high-performance web applications. This role involves hands-on full-stack development, including responsive front-end UI, robust server-side logic, and secure, cloud-integrated backend services. You will work in an Agile team environment, collaborating with cross-functional teams to deliver impactful digital solutions while maintaining high code quality, performance, and security standards.

Primary Responsibilities
Understand high-level product and technical requirements and convert them into scalable full-stack software designs.
Develop server-side applications using C#, ASP.NET Core/Framework, Web API, and Entity Framework.
Build intuitive and responsive front-end interfaces using Angular, JavaScript, HTML, and CSS.
Design, develop, and maintain RESTful APIs, including OData APIs, ensuring proper versioning and security.
Integrate authentication and authorization mechanisms using industry standards.
Work with Microsoft SQL Server for designing schemas, writing queries, and optimizing performance.
Build microservices and modular web components adhering to best practices.
Develop and deploy Azure Functions, utilize Azure Service Bus, and manage data using Azure Storage.
Integrate with messaging systems such as Apache Kafka for distributed event processing.
Contribute to CI/CD workflows, manage source control using Git, and participate in code reviews and team development activities.
Write and maintain clean, well-documented, and testable code with unit and integration test coverage.
Troubleshoot and resolve performance, scalability, and maintainability issues across the stack.
Support production deployments and maintain operational excellence for released features.
Stay current with evolving technologies and development practices to improve team efficiency and product quality.

Skills and Background
Strong proficiency in C# and .NET Framework 4.x / .NET Core
Solid experience in ASP.NET MVC / ASP.NET Core, Web API, and Entity Framework / EF Core
Knowledge of OData APIs, REST principles, and secure web communication practices
Front-end development experience using JavaScript, Angular (preferred), HTML5, CSS3
Proficient with Microsoft SQL Server, including query tuning, indexing, and stored procedures
Experience with authentication and authorization (OAuth, JWT, claims-based security)
Experience building microservices and using web services
Hands-on with Azure Functions, Azure Service Bus, and Azure Storage
Experience integrating and processing messages using Apache Kafka
Knowledge of source control systems like Git, and experience in Agile development environments
Exposure to unit testing frameworks, integration testing, and DevOps practices
Ability to write clean, maintainable, and well-structured code
Excellent problem-solving, debugging, and troubleshooting skills
Strong communication and collaboration skills

Work Experience
5–8 years of experience as a Full Stack Engineer or Software Developer.
Proven experience delivering scalable web applications and services in a production environment.
Experience in Agile/Scrum teams and cross-cultural collaboration.
Tier-1 or Tier-2 product company or equivalent high-performance team experience preferred.

Minimum Required Qualification
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related discipline from a Tier-1 or Tier-2 institute.

Reporting
The individual selected for this role will report to a Technical Project Manager, Engineering Manager, Engineering Director, or another designated leader within the division.

About Trimble
Dedicated to the world’s tomorrow, Trimble is a technology company delivering solutions that enable our customers to work in new ways to measure, build, grow, and move goods for a better quality of life. Core technologies in positioning, modeling, connectivity, and data analytics connect the digital and physical worlds to improve productivity, quality, safety, transparency, and sustainability. From purpose-built products and enterprise lifecycle solutions to industry cloud services, Trimble is transforming critical industries such as construction, geospatial, agriculture, and transportation to power an interconnected world of work. For more information, visit www.trimble.com.

Trimble’s Inclusiveness Commitment
We believe in celebrating our differences. That is why our diversity is our strength. To us, that means actively participating in opportunities to be inclusive. Diversity, Equity, and Inclusion have guided our current success while also moving our desire to improve. We actively seek to add members to our community who represent our customers and the places we live and work. We have programs in place to ensure our people are seen, heard, and welcomed—and most importantly, that they know they belong, no matter who they are or where they come from.
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Purpose
As a Lead Software Development Engineer in Test (SDET) on the Viewpoint team at Trimble, you will lead the test automation strategy, execution, and process optimization for large-scale web and mobile applications. In this role, you will mentor junior SDETs, work closely with development and product teams, and ensure quality through continuous testing and automation best practices. You will be accountable for driving test automation across platforms (web, iOS, Android), defining scalable frameworks, and establishing CI/CD-integrated quality gates. Your contribution will be critical to ensuring smooth, high-quality releases for Trimble Viewpoint’s mission-critical enterprise software used in the global construction industry.

What You Will Do
Define, implement, and evolve the overall test automation strategy for the Viewpoint product suite.
Build and maintain scalable, reusable test automation frameworks using C# for web and Appium/Selenium for mobile (iOS/Android).
Provide technical leadership to the SDET team, including reviewing test architecture, test cases, and automation code.
Champion quality-first principles across Agile teams and guide integration of testing into all stages of the development lifecycle.
Set up and manage cloud-based testing infrastructure using Sauce Labs, emulators/simulators, and physical devices.
Develop test strategies for API, functional, regression, performance, and cross-platform compatibility testing.
Lead root cause analysis of complex issues in coordination with development and QA teams.
Drive continuous improvements in test coverage, speed, and reliability across mobile and web.
Design dashboards and metrics to track test effectiveness, code coverage, and defect trends.
Collaborate with product managers, architects, and engineering leaders to align quality initiatives with business goals.
Help integrate test automation into CI/CD pipelines and maintain quality gates for every release.
Evaluate and recommend new tools, frameworks, and processes to improve automation and testing workflows.
Mentor junior SDETs and foster a high-performance quality culture within the engineering team.

What Skills & Experience You Should Have
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related technical field.
6+ years of experience in software testing or SDET roles, with at least 2+ years in a lead or senior QA/SDET capacity.
Advanced proficiency in test automation using C#, including frameworks like MSTest, NUnit, or xUnit.
Strong hands-on experience with Selenium, Appium, and mobile automation testing for iOS and Android.
Experience with Sauce Labs or similar device farms/cloud-based testing platforms.
Expertise in functional, regression, API, and performance testing.
Solid experience working in Agile teams, participating in sprint planning, estimations, and retrospectives.
Deep understanding of CI/CD pipelines, including integration of automated tests in build and deployment flows.
Prior experience with defect tracking systems (JIRA) and test case management tools (e.g., TestRail, Zephyr).
Familiarity with testing RESTful services, backend workflows, and microservice architectures.
Excellent problem-solving skills, with a mindset for root-cause analysis and continuous improvement.
Strong verbal and written communication skills with the ability to influence stakeholders and drive quality initiatives.

Viewpoint – Engineering Context
You will be part of the Trimble Viewpoint team building enterprise software solutions for construction management. Viewpoint’s technology stack includes:
C#, ASP.NET (Core/Framework), Web API, Angular, OData, and Microsoft SQL Server
Integration with Azure Functions, Azure Service Bus, Azure Storage, and Apache Kafka
RESTful services, microservices, and modern frontend technologies
Enterprise-grade CI/CD pipelines and Agile workflows
You’ll work alongside experienced full-stack engineers, product managers, and other QA professionals to deliver production-grade releases at scale.

Reporting Structure
This position reports to a Technical Project Manager or Engineering Manager within the Viewpoint organization.

About Trimble
Trimble is a technology company transforming the way the world works by delivering solutions that connect the physical and digital worlds. Core technologies in positioning, modeling, connectivity, and data analytics improve productivity, quality, safety, and sustainability across industries like construction, agriculture, transportation, and geospatial. Visit www.trimble.com to learn more.

Trimble’s Inclusiveness Commitment
We believe in celebrating our differences. Our diversity is our strength. We strive to build an inclusive workplace where everyone belongs and can thrive. Programs and practices at Trimble ensure individuals are seen, heard, welcomed—and most importantly—valued.
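The posting's automation frameworks are MSTest/NUnit/xUnit in C#, but the arrange/act/assert structure such suites share is language-neutral. A self-contained sketch of that shape in Python's unittest, where parse_version is an invented stand-in for a system under test:

```python
import unittest

def parse_version(s):
    """Toy system under test: parse a 'major.minor' version string."""
    major, minor = s.split(".")
    return int(major), int(minor)

class ParseVersionTests(unittest.TestCase):
    """Each test arranges input, acts on the system, and asserts one
    observable outcome — the same shape as an MSTest/NUnit fixture."""

    def test_parses_major_and_minor(self):
        self.assertEqual(parse_version("8.2"), (8, 2))

    def test_rejects_input_without_a_dot(self):
        # Unpacking a one-element split raises ValueError.
        with self.assertRaises(ValueError):
            parse_version("abc")

# Run the suite programmatically, as a CI/CD quality gate would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
# True
```

Wiring `result.wasSuccessful()` (or the equivalent exit code) into the pipeline is what turns a test suite into the "quality gate for every release" the role describes.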
Posted 1 week ago