0 years
0 Lacs
Chennai
On-site
Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

The Team: Stability Metrics and Toolkit Engineering is an infrastructure operations and development team that fills many of the grey areas between our stakeholders. Our team is responsible for developing tools that provide visibility into our availability and related metrics. We strive to deliver impactful, accurate, and valuable tools, monitoring, internal analytics services, and products, and we serve as a front-line touchpoint for live production incident triage, analysis, and remediation. The ideal candidate will use their engineering expertise to build solutions to novel problems in software development (both front-end and back-end), data engineering, and anomaly detection.

Job Responsibilities:
Research, design, and implement operational monitoring instruments and enhancements, including early failure detection using machine learning metrics (a minimal sketch follows this listing)
Perform routine operations to migrate data in order to distribute usage of our resources more evenly across clients and infrastructure
Develop tools and web applications to support new and existing business functions
Analyze application logs and metrics to determine service availability and uptime while developing automation
Interface with stakeholders to keep them informed of availability trends associated with business-critical functions
Maintain existing synthetic monitoring to ensure parity as new features are developed

Typical Qualifications:
Strong critical thinking and analytical skills
In-depth experience in at least one programming language commonly used for analysis, data engineering, or infrastructure automation tasks (Python, JavaScript, Perl, R, Java/Scala, C/C++, etc.)
Willingness to build with new or unfamiliar languages with guidance, as appropriate for each project/implementation
Flexibility and the ability to work independently as business and environment demands evolve
Experience with application performance measurement and troubleshooting across the application stack and its environments (Windows, Linux, Oracle, AWS, etc.)
Working familiarity with a variety of backends, including REST endpoints, databases, etc.

About athenahealth
Our vision: In an industry that becomes more complex by the day, we stand for simplicity. We offer IT solutions and expert services that eliminate the daily hurdles preventing healthcare providers from focusing entirely on their patients, powered by our vision to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.
Our company culture: Our talented employees (athenistas, as we call ourselves) spark the innovation and passion needed to accomplish our vision. We are a diverse group of dreamers and do-ers with unique knowledge, expertise, backgrounds, and perspectives. We unite as mission-driven problem-solvers with a deep desire to achieve our vision and make our time here count. Our award-winning culture is built around shared values of inclusiveness, accountability, and support.
Our DEI commitment: Our vision of accessible, high-quality, and sustainable healthcare for all requires addressing the inequities that stand in the way.
That's one reason we prioritize diversity, equity, and inclusion in every aspect of our business, from attracting and sustaining a diverse workforce to maintaining an inclusive environment for athenistas, our partners, customers, and the communities where we work and serve.

What we can do for you: Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces; some offices even welcome dogs. We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons. We provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. Learn more about our culture and benefits here: athenahealth.com/careers https://www.athenahealth.com/careers/equal-opportunity
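The early-failure-detection responsibility in this listing can be made concrete with a small illustration. This is a minimal sketch, not athenahealth's actual tooling; the metric, window size, and threshold are all assumptions chosen for the example.

```python
import pandas as pd

def flag_anomalies(latency_ms: pd.Series, window: int = 60, z_threshold: float = 3.0) -> pd.Series:
    """Flag samples whose rolling z-score exceeds the threshold.

    A simple early-warning heuristic: compare each latency sample against
    the mean and standard deviation of the preceding rolling window.
    """
    rolling = latency_ms.rolling(window, min_periods=window)
    z = (latency_ms - rolling.mean()) / rolling.std()
    return z.abs() > z_threshold  # NaN comparisons evaluate False, so warm-up rows are skipped

# Hypothetical usage on a per-minute latency series parsed from application logs
if __name__ == "__main__":
    series = pd.Series([100, 102, 98, 101] * 30 + [400])  # a spike at the end
    print(series[flag_anomalies(series)])
```

In practice the threshold and window would be tuned per metric, and a learned model could replace the z-score, but the alerting shape stays the same.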
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Primary Skills: PySpark, Spark, and proficiency in SQL
Secondary Skills: Scala and Python
Experience: 3+ years

Bachelor's degree or foreign equivalent required from an accredited institution. Three years of progressive experience in the specialty will also be considered in lieu of every year of education.
At least 5 years of experience in PySpark and Spark with Hadoop distributed frameworks, handling large amounts of big data using Spark and the Hadoop ecosystem for data pipeline creation, deployment, maintenance, and debugging (a minimal pipeline sketch follows this listing)
Experience in scheduling and monitoring jobs and creating tools for automation
At least 4 years of experience with Scala and Python required
Proficient knowledge of SQL with any RDBMS
Strong communication skills (verbal and written) with the ability to communicate across teams, internal and external, at all levels
Ability to work within deadlines and effectively prioritize and execute tasks

Preferred Qualifications:
At least 1 year of AWS development experience
Experience driving automation
DevOps knowledge is an added advantage
Advanced conceptual understanding of at least one programming language
Advanced conceptual understanding of one database and one operating system
Understanding of software engineering, with practice in at least one project
Ability to contribute to medium to complex tasks independently
Exposure to design principles and the ability to understand design specifications independently
Ability to run test cases and scenarios as per the plan
Ability to accept and respond to production issues and coordinate with stakeholders
Good understanding of the SDLC
Analytical abilities and logical thinking
Awareness of the latest technologies and trends
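For context, the pipeline-creation requirement above typically looks like the following minimal PySpark sketch: read, clean, aggregate, persist. The paths and column names are hypothetical placeholders, not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-ingest").getOrCreate()

# Ingest raw CSV, deduplicate, cast, aggregate, and persist as partitioned Parquet.
raw = spark.read.option("header", "true").csv("s3://bucket/raw/orders/")  # hypothetical path

clean = (raw
         .dropDuplicates(["order_id"])                      # hypothetical key column
         .withColumn("amount", F.col("amount").cast("double"))
         .filter(F.col("amount").isNotNull()))

daily = clean.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))

daily.write.mode("overwrite").partitionBy("order_date").parquet("s3://bucket/curated/daily_orders/")
spark.stop()
```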
Posted 1 week ago
8.0 years
2 - 8 Lacs
Noida
On-site
Company Overview: R1 is a leading provider of technology-driven solutions that help hospitals and health systems manage their financial systems and improve the patient experience. We are the one company that combines the deep expertise of a global workforce of revenue cycle professionals with the industry's most advanced technology platform, encompassing sophisticated analytics, AI, intelligent automation, and workflow orchestration. R1 is a place where we think boldly to create opportunities for everyone to innovate and grow. A place where we partner with purpose through transparency and inclusion. We are a global community of engineers, front-line associates, healthcare operators, and RCM experts that work together to go beyond for all those we serve. Because we know that all this adds up to something more, a place where we're all together better.

R1 India is proud to be recognized among the Top 25 Best Companies to Work For 2024 by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing, inclusion, and diversity is demonstrated through prestigious recognitions, with R1 India ranked among Best in Healthcare and Top 100 Best Companies for Women by Avtar & Seramount, and among the Top 10 Best Workplaces in Health & Wellness. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to 'make healthcare work better for all' by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are more than 16,000 strong in India, with a presence in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.

R1 RCM Inc. is a leading provider of technology-enabled revenue cycle management services which transform and solve challenges across health systems, hospitals, and physician practices. Headquartered in Chicago, R1® is a publicly traded organization with employees throughout the US and multiple India locations. Our mission is to be the one trusted partner to manage revenue, so providers and patients can focus on what matters most. Our priority is to always do what is best for our clients, patients, and each other. With our proven and scalable operating model, we complement a healthcare organization's infrastructure, quickly driving sustainable improvements to net patient revenue and cash flows while reducing operating costs and enhancing the patient experience.

Description: We are seeking a Staff Software Engineer (5I) (ETL) with 8-10 years of experience to join our ETL Development team. This role reports to the Engineering Manager, and the candidate will be involved in the planning, design, and implementation of our centralized data warehouse solution for data acquisition, ingestion, large-scale data processing, and automation/optimization across all the company's products.

About the Role: The candidate will play a crucial role in designing, developing, and leading the implementation of ETL processes and data architecture solutions, and will collaborate with various stakeholders to ensure the seamless integration, transformation, and loading of data to support our data warehousing and analytics initiatives.
Key Responsibilities:
Lead the design and architecture of ETL processes and data integration solutions
Develop and maintain ETL workflows using tools such as SSIS, Azure Databricks, SparkSQL, or similar (a minimal orchestration sketch follows this listing)
Collaborate with data architects, analysts, and business stakeholders to gather requirements and translate them into technical solutions
Optimize ETL processes for performance, scalability, and reliability
Conduct code reviews, provide technical guidance, and mentor junior developers
Troubleshoot and resolve issues related to ETL processes and data integration
Ensure compliance with data governance, security policies, and best practices
Document ETL processes and maintain comprehensive technical documentation
Stay updated with the latest trends and technologies in data integration and ETL

Qualifications:
Bachelor's degree in computer science, information technology, or a related field
10-12 years of experience in ETL development and data integration
Expertise in ETL tools such as SSIS, T-SQL, Azure Databricks, or similar
Knowledge of various SQL/NoSQL data storage mechanisms and Big Data technologies, and experience in data modeling
Knowledge of Azure Data Factory, Azure Databricks, and Azure Data Lake
Experience in Scala, SparkSQL, and Airflow is preferred
Proven experience in data architecture and designing scalable ETL solutions
Excellent problem-solving and analytical skills
Strong communication and leadership skills
Ability to work effectively in a team-oriented environment
Experience working with agile methodology
Healthcare industry experience preferred

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com. Visit us on Facebook.
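Since Airflow is listed as a preferred skill for scheduling ETL workflows, a minimal Airflow 2.x DAG sketch is shown below (the `schedule` argument assumes Airflow 2.4+). The DAG id, retry policy, and callable are assumptions for illustration only.

```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load(**context):
    # Placeholder for the actual ETL step (e.g., triggering a Databricks or SparkSQL job).
    print("running ETL for", context["ds"])

with DAG(
    dag_id="nightly_warehouse_load",          # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```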
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities:
Design, develop, and implement robust Big Data solutions using technologies such as Hadoop, Spark, and NoSQL databases
Build and maintain scalable data pipelines for effective data ingestion, transformation, and analysis
Collaborate with data scientists, analysts, and cross-functional teams to understand business requirements and translate them into technical solutions
Ensure data quality and integrity through effective validation, monitoring, and troubleshooting techniques (a minimal validation sketch follows this listing)
Optimize data processing workflows for maximum performance and efficiency
Stay up-to-date with evolving Big Data technologies and methodologies to enhance existing systems
Implement best practices for data governance, security, and compliance
Document technical designs, processes, and procedures to support knowledge sharing across teams

Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
4+ years of experience as a Big Data Engineer or in a similar role
Strong proficiency in Big Data technologies (Hadoop, Spark, Hive, Pig) and frameworks
Extensive experience with programming languages such as Python, Scala, or Java
Knowledge of data modeling and data warehousing concepts
Familiarity with NoSQL databases like Cassandra or MongoDB
Proficiency in SQL for data querying and analysis
Strong analytical and problem-solving skills
Excellent communication and collaboration abilities
Ability to work independently and effectively in a fast-paced environment

Benefits:
Competitive salary and benefits package
Opportunity to work on cutting-edge technologies and solve complex challenges
Dynamic and collaborative work environment with opportunities for growth and career advancement
Regular training and professional development opportunities
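The data-quality responsibility above, sketched in PySpark under assumed rules and column names (nothing here is prescribed by the posting): each rule is a boolean expression, and the job fails if a rule's failure rate exceeds an error budget.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://bucket/curated/events/")  # hypothetical dataset

# Each rule maps a name to a boolean column; failures are counted per rule.
rules = {
    "non_null_user_id": F.col("user_id").isNotNull(),
    "positive_amount":  F.col("amount") > 0,
    "valid_event_type": F.col("event_type").isin("click", "view", "purchase"),
}

total = df.count()
for name, condition in rules.items():
    failed = df.filter(~condition).count()
    print(f"{name}: {failed}/{total} rows failed")
    if failed / total > 0.01:  # fail the pipeline above a 1% error budget (assumed)
        raise ValueError(f"Data quality gate failed: {name}")
```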
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities:
Provide technical support and troubleshooting for Big Data applications and systems built on the Hadoop ecosystem
Monitor system performance, analyze logs, and identify potential issues before they impact services (a minimal monitoring sketch follows this listing)
Collaborate with engineering teams to deploy and configure Hadoop clusters and related components
Assist in maintenance and upgrades of Hadoop environments to ensure optimum performance and security
Develop and maintain documentation for processes, procedures, and system configurations
Implement data backup and recovery procedures to ensure data integrity and availability
Participate in on-call rotations to provide after-hours support as needed
Stay up to date with Hadoop technologies and support methodologies
Assist in the training and onboarding of new team members and users on Hadoop best practices

Requirements:
Bachelor's degree in Computer Science, Information Technology, or a related field
3+ years of experience in Big Data support or system administration, specifically with the Hadoop ecosystem
Strong understanding of Hadoop components (HDFS, MapReduce, Hive, Pig, etc.)
Experience with system monitoring and diagnostics tools
Proficiency in Linux/Unix commands and scripting languages (Bash, Python)
Basic understanding of database technologies and data warehousing concepts
Strong problem-solving skills and the ability to work under pressure
Excellent communication and interpersonal skills
Ability to work independently as well as collaboratively in a team environment
Willingness to learn new technologies and enhance skills

Skills: Hadoop, Spark/Scala, HDFS, SQL, Unix scripting, data backup, system monitoring

Benefits:
Competitive salary and benefits package
Opportunity to work on cutting-edge technologies and solve complex challenges
Dynamic and collaborative work environment with opportunities for growth and career advancement
Regular training and professional development opportunities
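Routine support work like the monitoring responsibility above is often scripted around the stock Hadoop CLI. Below is a minimal sketch that shells out to `hdfs dfsadmin -report` and parses the cluster's "DFS Used%" line; the alert threshold is an assumption, and the report format should be verified against your distribution.

```python
import re
import subprocess

def hdfs_capacity_used_pct() -> float:
    """Parse 'DFS Used%' from `hdfs dfsadmin -report` (cluster summary section)."""
    out = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"DFS Used%:\s*([\d.]+)%", out)
    if not match:
        raise RuntimeError("Could not parse DFS Used% from report output")
    return float(match.group(1))

if __name__ == "__main__":
    used = hdfs_capacity_used_pct()
    print(f"DFS used: {used:.1f}%")
    if used > 80.0:  # alert threshold is an assumption
        print("WARNING: cluster nearing capacity")
```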
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the role: The magic at Magicpin is powered by cutting-edge technology built by some of the smartest minds in the industry. We are solving some of the most complex and challenging engineering problems in search & discovery and data science & analytics at significant scale, and we are growing fast. Our tech stack includes microservices in Go, Java, and Python built on K8s, using multiple relational/NoSQL stores, asynchronous communication, and public cloud technologies. If you are an engineer with serious tech chops and love solving high-scale, complex problems in the e-commerce space, we have many challenges for you.

What will you be doing?
Take ownership and deliver solutions that can work at scale with close to zero defects
Produce high-quality code, unit test cases, and deployment scripts
Learn and dive deep into new technologies on the job, especially modern data storage and streaming open-source systems (BigTable, Kafka, Solr, Elastic)
Build high-throughput, low-latency systems
Build a solid understanding of CS fundamentals: operating systems, databases, and data structures
Work closely with senior members of the team to come up with better solutions
Bring a bias to action and a hacker's mindset: finding ways to crack the problem at hand, not resting till it is solved

Qualifications:
B.E./B.Tech in Computer Science or an equivalent degree with 2+ years of work experience
Proficiency in one or more programming languages, including but not limited to Java/Scala, Python, or C++
Excellent problem-solving skills
Solid engineering principles and a clear understanding of data structures and algorithms
In-depth knowledge of RDBMS, NoSQL, and application servers
Familiarity with cloud platforms (AWS, GCP, or Azure)
Posted 1 week ago
0 years
84 - 96 Lacs
India
On-site
Experience in building order and execution management or trading systems is required
Financial experience and exposure to trading
In-depth understanding of concurrent programming and experience in designing high-throughput, high-availability, fault-tolerant distributed applications is required
Must possess experience in developing and deploying applications within an enterprise environment that follows a well-established software development lifecycle (SDLC)
MySQL experience is mandatory for this position
Experience in building distributed applications using NoSQL technologies like Cassandra, coordination services like Zookeeper, and caching technologies like Apache Ignite and Redis strongly preferred
Experience in building microservices architecture/SOA is required
Experience in message-oriented streaming middleware architecture is preferred (Kafka, MQ, NATS, AMPS); a minimal producer sketch follows this listing
Experience with orchestration, containerization, and building cloud-native applications (AWS, Azure) is a plus
Experience with modern web technology such as Angular, React, and TypeScript is a plus
Strong analytical and software architecture design skills with an emphasis on test-driven development
Experience in programming languages such as Scala or Python would be a plus
Experience using project management methodologies such as Agile/Scrum
Effective communication and presentation skills (written and verbal) are required
Bachelor's or master's degree in computer science or engineering
Good communication skills

#CoreJava (Must) #Multithreading (Must) #Spring #Kafka or any message queue #Microservices (Must) #Coding #MySQL (Must) #SDLC

Job Types: Full-time, Permanent
Pay: ₹700,000.00 - ₹800,000.00 per month
Work Location: In person
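To illustrate the message-oriented middleware requirement, here is a minimal producer sketch using the kafka-python client. The broker address, topic, and payload are placeholders; a production trading system would layer schemas, retries, and idempotence on top of this.

```python
import json
from kafka import KafkaProducer  # kafka-python package

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",              # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",  # wait for full in-sync-replica acknowledgement for durability
)

# Publish a hypothetical execution report; keying by order id preserves
# per-order ordering within a partition.
producer.send(
    "execution-reports",                             # placeholder topic
    key=b"order-123",
    value={"order_id": "order-123", "status": "FILLED", "qty": 100},
)
producer.flush()
```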
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers to levels they cannot achieve anywhere else. This is a world of more possibilities, more innovation, more openness in a cloud-enabled world. The Business & Industry Copilots group is a rapidly growing organization that is responsible for the Microsoft Dynamics 365 suite of products, Power Apps, Power Automate, Dataverse, AI Builder, Microsoft Industry Solutions, and more. Microsoft is considered one of the leaders in Software as a Service in the world of business applications, and this organization is at the heart of how business applications are designed and delivered.

This is an exciting time to join our group, BIC Customer Experience, and work on something highly strategic to Microsoft. The goal of Customer Zero Engineering is to build the next generation of our applications running on Dynamics 365, AI, Copilot, and several other Microsoft cloud services to drive AI transformation across the Marketing, Sales, Services, and Support organizations within Microsoft. We innovate quickly and collaborate closely with our partners and customers in an agile, high-energy environment. Leveraging the scalability and value of Azure and Power Platform, we ensure our solutions are robust and efficient. Our organization's implementation acts as a reference architecture for large companies and helps drive product capabilities. If the opportunity to collaborate with a diverse engineering team on enabling end-to-end business scenarios using cutting-edge technologies, and to solve challenging problems for large-scale 24x7 business SaaS applications, excites you, please come and talk to us!

We are looking for talented and motivated data engineers interested in helping our organization empower learners by producing valuable data that can be used to understand the organization's needs and make the right decisions. We want you for your passion for technology, your curiosity and willingness to learn, your ability to communicate well in a team environment, your desire to make our team better with your contributions, and your ability to deliver. We use industry-standard technology: C#, JavaScript/TypeScript, HTML5, ETL/ELT, data warehousing, and/or Business Intelligence development.

Responsibilities:
Implement scalable data models, data pipelines, data storage, management, and transformation solutions for real-time decisioning, reporting, data collecting, and related functions
Leverage knowledge of machine learning (ML) models and implement appropriate solutions for business objectives
Ship high-quality, well-tested, secure, and maintainable code
Develop and maintain software designed to improve data governance and security
Troubleshoot and resolve issues related to data processing and storage
Collaborate effectively with teammates, other teams, and disciplines, and drive improvements in engineering
Create and implement code for a product, service, or feature, reusing code as applicable
Contribute to efforts to break down larger work items into smaller work items and provide estimation
Troubleshoot live site issues as part of both product development and as Designated Responsible Individual (DRI) during live site rotations
Remain current in skills by investing time and effort into staying abreast of the latest technologies

Qualifications (required):
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
3+ years of experience in business analytics, software development, data modeling, or data engineering work. Software development using languages like C#, JavaScript, or Java. Experience using a variety of data stores, including data warehouses, RDBMS, in-memory caches, and document databases. Proficiency with SQL and NoSQL and hands-on experience using distributed computing platforms. Experience developing on cloud platforms (e.g., Azure, AWS) in a continuous delivery environment. Strong problem solving, design, implementation, and communication skills. Strong intellectual curiosity and passion for learning new technologies.

Preferred Qualifications: Experience with data engineering projects with a firm sense of accountability and ownership. Experience in ETL/ELT, data warehousing, data pipelines, and/or Business Intelligence development. Experience using ML, anomaly detection, predictive analysis, and exploratory data analysis. A strong understanding of the value of data, data exploration, and the benefits of a data-driven organizational culture. Business Intelligence experience or visualization with tools such as Power BI is also beneficial. Experience implementing data systems in C#/Python/Scala or similar. Working knowledge of any (or multiple) of the following tech stacks is a plus: SQL, Databricks, PySparkSQL, Azure Synapse, Azure Data Factory, Azure Fabric, or similar. Basic knowledge of the Microsoft Dynamics Platform will be an added advantage. #BICJobs

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 week ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Responsibilities:
Design, develop, and implement robust Big Data solutions using technologies such as Hadoop, Spark, and NoSQL databases
Build and maintain scalable data pipelines for effective data ingestion, transformation, and analysis
Collaborate with data scientists, analysts, and cross-functional teams to understand business requirements and translate them into technical solutions
Ensure data quality and integrity through effective validation, monitoring, and troubleshooting techniques
Optimize data processing workflows for maximum performance and efficiency
Stay up-to-date with evolving Big Data technologies and methodologies to enhance existing systems
Implement best practices for data governance, security, and compliance
Document technical designs, processes, and procedures to support knowledge sharing across teams

Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
4+ years of experience as a Big Data Engineer or in a similar role
Strong proficiency in Big Data technologies (Hadoop, Spark, Hive, Pig) and frameworks
Extensive experience with programming languages such as Python, Scala, or Java
Knowledge of data modeling and data warehousing concepts
Familiarity with NoSQL databases like Cassandra or MongoDB
Proficiency in SQL for data querying and analysis
Strong analytical and problem-solving skills
Excellent communication and collaboration abilities
Ability to work independently and effectively in a fast-paced environment

Benefits:
Competitive salary and benefits package
Opportunity to work on cutting-edge technologies and solve complex challenges
Dynamic and collaborative work environment with opportunities for growth and career advancement
Regular training and professional development opportunities
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities:
Design, develop, and implement robust Big Data solutions using technologies such as Hadoop, Spark, and NoSQL databases
Build and maintain scalable data pipelines for effective data ingestion, transformation, and analysis
Collaborate with data scientists, analysts, and cross-functional teams to understand business requirements and translate them into technical solutions
Ensure data quality and integrity through effective validation, monitoring, and troubleshooting techniques
Optimize data processing workflows for maximum performance and efficiency
Stay up-to-date with evolving Big Data technologies and methodologies to enhance existing systems
Implement best practices for data governance, security, and compliance
Document technical designs, processes, and procedures to support knowledge sharing across teams

Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
4+ years of experience as a Big Data Engineer or in a similar role
Strong proficiency in Big Data technologies (Hadoop, Spark, Hive, Pig) and frameworks
Extensive experience with programming languages such as Python, Scala, or Java
Knowledge of data modeling and data warehousing concepts
Familiarity with NoSQL databases like Cassandra or MongoDB
Proficiency in SQL for data querying and analysis
Strong analytical and problem-solving skills
Excellent communication and collaboration abilities
Ability to work independently and effectively in a fast-paced environment

Benefits:
Competitive salary and benefits package
Opportunity to work on cutting-edge technologies and solve complex challenges
Dynamic and collaborative work environment with opportunities for growth and career advancement
Regular training and professional development opportunities
Posted 1 week ago
10.0 - 15.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Project description: The Finance Market Solutions team requires a Test Automation Lead to work on a NeoBanking project.

Responsibilities:
Lead QA engineering initiatives: Design and build custom QA engineering tools and frameworks to accelerate quality automation across teams. Drive shift-left quality practices, enabling developers to perform integration and automation testing early in the SDLC through self-service tooling.
Automation & tooling: Architect and develop advanced test automation frameworks beyond traditional UI-based automation. Build tools for test evidence collection, reporting, and end-to-end traceability of requirements. Implement continuous integration and continuous testing pipelines tightly integrated with CI/CD systems.
Performance testing expertise: Lead performance testing efforts and create custom performance testing tools, leveraging and extending frameworks like Gatling to meet specific business needs.
Cloud-native QA practices: Implement QA solutions tailored for cloud-native environments. Leverage technologies like Kubernetes (K8s), Docker, and microservices architectures to enable scalable and robust quality engineering practices.
Collaboration & leadership: Collaborate with cross-functional engineering teams to integrate quality processes and tools seamlessly into their workflows. Mentor and guide QA engineers and developers on best practices for testing and automation.

Skills (must have):
10+ years of experience in QA engineering with a focus on test automation and tooling development
Strong software engineering skills in languages like Java or Scala
Experience on investment banking projects
Proven experience building in-house QA tools and frameworks for integration, system, and performance testing
Solid experience with performance testing tools (e.g., Gatling, JMeter) and extending them with custom capabilities
Experience with web automation frameworks like Playwright, and mobile test automation frameworks (e.g., Appium, Detox); a minimal Playwright sketch follows this listing
Expertise in cloud-native technologies: Kubernetes, Docker, Helm, and experience testing microservices architectures
Deep understanding of CI/CD pipelines, DevOps workflows, and infrastructure as code
Strong focus on end-to-end requirements traceability and tooling for evidence tracking and reporting
Experience driving shift-left testing practices and enabling self-service quality automation for development teams
Excellent communication and leadership skills; ability to influence technical direction and quality culture across teams
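As a concrete illustration of the Playwright requirement, below is a minimal smoke test using Playwright's Python sync API. The URL and selectors are placeholders, not taken from any real system.

```python
from playwright.sync_api import sync_playwright, expect

def test_login_smoke():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example-bank.test/login")   # placeholder URL
        page.fill("#username", "qa-user")              # placeholder selectors
        page.fill("#password", "secret")
        page.click("button[type=submit]")
        # Assert the post-login page rendered; failure raises with a diff.
        expect(page.locator("h1")).to_contain_text("Dashboard")
        browser.close()
```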
Posted 1 week ago
2.0 - 7.0 years
10 - 15 Lacs
Bengaluru
Work from Office
We are looking for a Data Engineer to join a new data engineering team in Bangalore supporting our existing software application. This is an amazing opportunity to help positively reshape, grow, and support our application functionally into new frameworks within our fully cloud-native environment, leveraging the latest in serverless technologies. We have a great skill set in microservice development, and we would love to speak with you if you have a Java ETL background and are looking to expand your skill set.

About you (experience, education, skills, and accomplishments):
Bachelor's degree in Computer Science or equivalent experience
Minimum 2 years of experience developing software in Java
Familiarity with Hibernate, JavaServer Faces, Apache Ant, and XSLT
Familiarity working in Unix/Linux environments
Experience with AWS cloud services (EC2, S3, RDS), GCP, or Azure
Experience with SQL databases (e.g., Oracle, MySQL)
Good understanding of Oracle PL/SQL

It would be great if you also had:
Experience programming using TDD
Experience with distributed microservices
Experience using the Databricks platform
Experience using JVM languages (Groovy/Scala)

What will you be doing in this role?
Learn independently and work self-directed in researching solutions and building POCs
Collaborate and design with the content team, providing guidance on integrating complex features cost-effectively
Document relevant project functionality (API, architecture, etc.)
Share functionality with the rest of the platform team
Design and produce clean code

About the team: We're a geo-diverse team of software engineers that works together with our product team and integration partners to design, develop, and support our robust and growing data-driven service portfolio. The services we create and support enable our internal customers to provide the most up-to-date data to their customers, helping drive innovation.

Hours of work: This is a permanent position with Clarivate, 9 hours per day including a lunch break. The teams are globally distributed, hence you should be flexible with working hours.
Posted 1 week ago
6.0 - 11.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Hiring a Data Engineer in Bangalore with 6+ years of experience in the skills below.
Must have:
- Big Data technologies: Hadoop, MapReduce, Spark, Kafka, Flink
- Programming languages: Java/Scala/Python
- Cloud: Azure, AWS, Google Cloud
- Docker/Kubernetes
Required candidate profile:
- Strong communication skills
- Experience with relational SQL/NoSQL databases: Postgres and Cassandra
- Experience with the ELK stack
- Immediate joining is a plus
- Must be ready to work from office
Posted 1 week ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad, Bengaluru, Secunderabad
Work from Office
We are looking for a Senior Software Engineer to join our IMS team in Bangalore. This is an amazing opportunity to work on Big Data technologies involved in content ingestion. The team consists of 10-12 engineers reporting to the Sr Manager. We have a great skill set in Spark, Java, Scala, Hive, SQL, XSLT, AWS EMR, S3, etc., and we would love to speak with you if you have skills in the same.

About you (experience, education, skills, and accomplishments):
Work experience: Minimum 4 years of experience in Big Data projects involving content ingestion, curation, and transformation
Technical skills: Spark, Python/Java, Scala, AWS EMR, S3, SQS, Hive, XSLT
Education: Bachelor's degree in computer science, mechanical engineering, or a related field, or at least 4 years of equivalent relevant experience

It would be great if you also had:
Experience in analyzing and optimizing performance
Exposure to automation test frameworks
Databricks experience
Java and Python programming

What will you be doing in this role?
Take an active role in the planning, estimation, design, development, and testing of large-scale, enterprise-wide initiatives to build or enhance a platform or custom applications used for the acquisition, transformation, entity extraction, and mining of content on behalf of business units across Clarivate Analytics
Troubleshoot and address production issues within the given SLA
Coordinate with global representatives and teams
Posted 1 week ago
3.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 years of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, while continuously seeking opportunities for improvement and efficiency in your work.

Roles & Responsibilities:
Expected to perform independently and become an SME
Active participation/contribution in team discussions is required
Contribute to providing solutions to work-related problems
Assist in the documentation of application specifications and user guides
Engage in code reviews to ensure quality and adherence to best practices

Professional & Technical Skills:
Must-have skills: Proficiency in the Databricks Unified Data Analytics Platform
Strong understanding of data integration and ETL processes
Experience with cloud computing platforms and services
Familiarity with programming languages such as Python or Scala
Knowledge of data visualization techniques and tools

Additional Information:
The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform
This position is based at our Bhubaneswar office
A 15-year full-time education is required
Posted 1 week ago
8.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Distributed data technologies (Hadoop, MapReduce, Spark, Kafka, Flink, etc.) for building efficient, large-scale big data pipelines; Java, Scala, Python, or equivalent; stream-processing applications using Apache Flink and Kafka; AWS, Azure, Google Cloud
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Engineer
Location: Hyderabad, India
Employment Type: Full-time
Experience: 4 to 7 years

About NationsBenefits: At NationsBenefits, we are leading the transformation of the insurance industry by developing innovative benefits management solutions. We focus on modernizing complex back-office systems to create scalable, secure, and high-performing platforms that streamline operations for our clients. As part of our strategic growth, we are focused on platform modernization: transitioning legacy systems to modern, cloud-native architectures that support the scalability, reliability, and high performance of core back-office functions in the insurance domain.

Position Overview: We are seeking a self-driven Data Engineer with 4-7 years of experience to build and optimize scalable ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. The role involves working across scrum teams to develop data solutions, ensure data governance with Unity Catalog, and support real-time and batch processing. Strong problem-solving skills, T-SQL expertise, and hands-on experience with Azure cloud tools are essential. Healthcare domain knowledge is a plus.

Job Description:
Work with different scrum teams to develop all the quality database programming requirements of the sprint
Experience with Azure cloud platforms: advanced Python programming, Databricks, Azure SQL, Data Factory (ADF), Data Lake, data storage, SSIS
Create and deploy scalable ETL/ELT pipelines with Azure Databricks utilizing PySpark and SQL
Create Delta Lake tables with ACID transactions and schema evolution to support real-time and batch processing (a minimal sketch follows this listing)
Use Unity Catalog for centralized data governance, access control, and data lineage tracking
Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution
Develop unit tests that can run automatically
Use SOLID development principles to maintain data integrity and cohesiveness
Interact with the product owner and business representatives to determine and satisfy needs
Sense of ownership and pride in your performance and its impact on the company's success
Critical thinking and problem-solving skills
Team player with good time-management skills
Great interpersonal and communication skills

Mandatory Qualifications:
4-7 years of experience as a Data Engineer
Self-driven with minimal supervision
Proven experience with T-SQL programming, Azure Databricks, Spark (PySpark/Scala), Delta Lake, Unity Catalog, and ADLS Gen2
Microsoft TFS, Visual Studio, and DevOps exposure
Experience with cloud platforms such as Azure
Analytical, problem-solving mindset

Preferred Qualifications:
Healthcare domain knowledge
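The Delta Lake responsibility above might look like the following minimal PySpark sketch: an ACID append with schema evolution enabled, so that newly arriving columns merge into the table instead of failing the write. The landing path and table name are assumptions.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("delta-ingest")
         .getOrCreate())  # assumes a Databricks or Delta-enabled cluster

batch = spark.read.json("/mnt/raw/claims/2025-07-24/")  # hypothetical landing path

# Delta provides ACID appends; mergeSchema lets newly arriving columns
# evolve the table schema instead of failing the write.
(batch.write
      .format("delta")
      .mode("append")
      .option("mergeSchema", "true")
      .saveAsTable("bronze.claims"))                    # hypothetical table
```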
Posted 1 week ago
0.0 years
0 Lacs
Mohali, Punjab
On-site
Job Title: Software Engineer (SDE) Intern
Company: DataTroops
Location: Mohali (Punjab) - Work From Office
Shift: As per client decision (day shift)

About the Role: We are looking for a highly motivated Software Development Engineer (SDE) Intern to join our dynamic team. As an intern, you will have the opportunity to work on challenging projects, solve real-world problems, and gain exposure to full-stack development. You will work closely with experienced engineers and learn best practices in software development while honing your problem-solving and technical skills.

Key Responsibilities:
Collaborate with cross-functional teams to design, develop, test, and maintain end-to-end web applications
Develop scalable and maintainable backend services using Scala, JavaScript, or other languages as required
Build responsive, user-friendly frontend interfaces using modern frameworks and ensure a seamless user experience
Write clean, efficient, and well-documented code across the stack
Participate in code reviews, ensuring best practices in software development and architecture
Design and implement automated unit and integration tests to ensure code quality and reliability
Contribute to CI/CD pipelines and assist in the deployment, monitoring, and debugging of applications in development and production environments
Optimize systems for performance, security, and scalability
Work closely with QA to identify bugs and implement fixes proactively
Ensure high availability and system reliability through DevOps practices and monitoring tools
Stay updated with emerging trends in full-stack development, cloud platforms, and infrastructure automation
Demonstrate strong collaboration and communication skills to work efficiently across engineering, QA, and DevOps teams

Required Skills:
Strong problem-solving skills with a deep understanding of data structures and algorithms
Proficiency in one or more programming languages: Java, C, C++, Python
Exposure to or willingness to learn technologies like React, Node.js, Play Framework, MongoDB, PostgreSQL, etc.
Familiarity with core computer science concepts: operating systems, databases, networking, etc.
Basic understanding of or interest in cloud services, CI/CD, and containerization tools (e.g., Docker, GitHub Actions)
A self-starter with a passion for learning new technologies and working in fast-paced, dynamic environments
Strong communication, collaboration, and a team-player attitude

Note: Only BCA (2025 pass-out) or B.Tech (2026 pass-out) candidates can apply.

Compensation: The salary for this internship position will be determined based on the candidate's experience, skills, and performance during the interview process.

How to Apply: If you're ready to take on new challenges and grow with us, send your resume to hr@datatroops.io
Note: Only candidates based in the Tricity area or willing to relocate to Mohali will be considered for this role.

Job Types: Full-time, Fresher, Internship
Pay: ₹1.00 per hour
Schedule: Monday to Friday, weekend availability
Work Location: In person
Posted 1 week ago
50.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Who we are: Irdeto is the world leader in digital platform cybersecurity, empowering businesses to innovate for a secure, connected future. Building on over 50 years of expertise in security, Irdeto's services and solutions protect revenue, enable growth, and fight cybercrime in video entertainment, video games, and connected industries including transport, health, and infrastructure. Irdeto is the security partner dedicated to empowering a secure world where people can connect with confidence. With teams and offices around the world, Irdeto's greatest asset is its people; our diversity is celebrated through an inclusive workplace, where everyone has an equal opportunity to drive innovation and contribute to Irdeto's success.

The Role: As a Data Engineer Intern, you will work closely with the data engineering team to support the development, maintenance, and optimization of data infrastructure and pipelines. Your primary focus will be on data integration, data transformation, and data quality, enabling the efficient and reliable flow of data across various systems. This internship offers a valuable opportunity to gain hands-on experience in the field of data engineering while contributing to real-world projects and initiatives.

Key Responsibilities:
Data integration: Collaborate with data engineering and data science teams to understand data requirements and source data from various internal and external systems. Implement data extraction, transformation, and loading (ETL) processes to ensure a smooth flow of data into the data warehouse.
Data pipeline development: Assist in designing and building scalable, robust, and maintainable data pipelines that collect, process, and store data from different sources. Work with technologies like Apache Spark, Apache Kafka, or other relevant tools.
Data quality assurance: Contribute to the development and implementation of data quality checks and validation procedures to ensure data accuracy, consistency, and completeness.
Database management: Help manage and maintain databases, ensuring data integrity, security, and high availability. Monitor database performance and troubleshoot issues as they arise.
Documentation: Document data engineering processes, data pipelines, and system configurations to facilitate knowledge sharing and ensure that the data infrastructure remains well-documented.
Automation: Identify opportunities to automate repetitive tasks and processes within data engineering workflows, enhancing efficiency and reducing manual effort.
Data governance: Adhere to data governance and security policies, maintaining data privacy and confidentiality throughout all data-related activities.
Collaborative projects: Collaborate with cross-functional teams on data-related projects, supporting their data needs and contributing to the success of company initiatives.
Continuous learning: Stay updated with the latest advancements in data engineering technologies and practices, and proactively apply this knowledge to improve data engineering processes.

Requirements:
Enrolled in a bachelor's or master's degree program in Computer Science, Data Science, Information Technology, or a related field
Familiarity with programming languages like Python, Java, or Scala
Basic understanding of databases and SQL
Knowledge of data integration and ETL concepts
Experience with big data technologies is a plus
Analytical mindset and attention to detail
Strong problem-solving skills and the ability to work independently and in a team
Good communication skills and the ability to collaborate effectively with various stakeholders. What you can expect from us: We invest in our talented employees and promote collaboration, creativity, and innovation while supporting health and well-being across our global workforce. In addition to competitive remuneration, we offer: A multicultural and international environment where diversity is celebrated Professional education opportunities and training programs Innovation sabbaticals Volunteer Day State-of-the-art office spaces Additional perks tailored to local offices (e.g., on-site gyms, fresh fruit, parking, yoga rooms, etc.) Equal Opportunity at Irdeto Irdeto is proud to be an equal opportunity employer. All decisions are based on qualifications and business needs, and we do not tolerate discrimination or harassment. We welcome applications from individuals with diverse abilities and provide accommodation during the hiring process upon request. If you’re excited about this role but don’t meet every qualification, we encourage you to apply. We believe diverse perspectives and experiences make our teams stronger. Welcome to Irdeto!
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The Impact:
Design, build, and measure complex ELT jobs to process disparate data sources and form a high-integrity, high-quality, clean data asset
Execute and provide feedback on data modeling policies, procedures, processes, and standards
Assist with capturing and documenting system flow and other pertinent technical information about data, database design, and systems
Develop comprehensive data quality standards and implement effective tools to ensure data accuracy and reliability
Collaborate with various Investment Management departments to gain a better understanding of new data patterns
Collaborate with data analysts, data architects, and BI developers to ensure the design and development of scalable data solutions aligned with business goals
Translate high-level business requirements into detailed technical specs

The Minimum Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
Experience:
7-9 years of experience with data analytics, data modeling, and database design
3+ years of coding and scripting (Python, Java, Scala) and design experience
3+ years of experience with the Spark framework
5+ years of experience with ELT methodologies and tools
5+ years of mastery in designing, developing, tuning, and troubleshooting SQL
Knowledge of Informatica PowerCenter and Informatica IDMC
Knowledge of distributed, column-oriented technology for creating high-performance database solutions such as Vertica and Snowflake
Strong data analysis skills for extracting insights from financial data
Proficiency in reporting tools (e.g., Power BI, Tableau)
Posted 1 week ago
18.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hi all, greetings from Live Connections! We have an urgent requirement for a Regional Head of Engineering role with one of our MNC clients in Bangalore/Pune. Please find the job description below and kindly share your updated CV with sharmila@liveconnections.in

Position Title: Regional Head of Engineering
Experience Level: 18+ years
Duration: Full time
Location: Bangalore/Pune

Your skills and experience:
Demonstrable experience leading multiple development teams
Hands-on polyglot software development experience (Java/Kotlin/Scala or similar)
Strong architectural and design skills, including relevant experience with cloud technologies, agile and DevOps practices, and database technologies and platforms
Proven track record applying modern standards and rigor to engineering teams, coaching and mentoring towards measurable results; familiarity with DORA, SPACE, and related research is a plus
Strong communicator and strategist, able to work at a senior level and at a deep technical level
Strong organization skills, with project and program management experience

Regards,
Sharmila
sharmila@liveconnections.in
Posted 1 week ago
0.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Role: Senior Analyst - Data Engineering
Experience: 4 to 6 years
Location: Chennai, Tamil Nadu, India (CHN)

Job Description: We are seeking a highly skilled and motivated Senior Engineer with deep expertise in the Databricks platform to join our growing data engineering and analytics team. As a Senior Engineer, you will play a crucial role in designing, building, and optimizing our data pipelines, data lakehouse solutions, and analytics infrastructure on Databricks. You will collaborate closely with data scientists, analysts, and other engineers to deliver high-quality, scalable, and reliable data solutions that drive business insights and decision-making.

Job Responsibilities:
Design, develop, and maintain scalable and robust data pipelines and ETL/ELT processes using Databricks, Spark (PySpark, Scala), Delta Lake, and related technologies
Architect and implement data lakehouse solutions on Databricks, ensuring data quality, integrity, and performance
Develop and optimize data models for analytical and reporting purposes within the Databricks environment
Implement and manage data governance and security best practices within the Databricks platform, including Unity Catalog and RBAC
Utilize Databricks Delta Live Tables (DLT) to build and manage reliable data pipelines
Implement and leverage Change Data Feed (CDF) for efficient data synchronization and updates (a minimal sketch follows this listing)
Monitor and troubleshoot data pipelines and system performance on the Databricks platform
Collaborate with data scientists and analysts to understand their data requirements and provide efficient data access and processing solutions
Participate in code reviews, ensuring adherence to coding standards and best practices
Contribute to the development of technical documentation and knowledge sharing within the team
Stay up-to-date with the latest advancements in Databricks and related data technologies
Mentor and guide junior engineers on the team
Participate in the planning and execution of data-related projects and initiatives

Skills Required: Databricks, SQL, PySpark, Python, data modeling, DE concepts

Job Snapshot:
Updated Date: 24-07-2025
Job ID: J_3897
Location: Chennai, Tamil Nadu, India
Experience: 4 - 6 years
Employee Type: Permanent
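The Change Data Feed responsibility can be sketched as follows: reading the change feed from a Delta table on which CDF has been enabled. The table name and starting version are assumptions; `_change_type` is the column Delta adds to distinguish inserts, deletes, and update pre/post images.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdf-sync").getOrCreate()

# Requires the table to have been created or altered with
# TBLPROPERTIES (delta.enableChangeDataFeed = true).
changes = (spark.read.format("delta")
           .option("readChangeFeed", "true")
           .option("startingVersion", 42)   # hypothetical checkpointed version
           .table("silver.customers"))      # hypothetical table

# Keep only rows representing the new state: inserts and post-update images.
changes.filter("_change_type in ('insert', 'update_postimage')").show()
```

In a real sync job the starting version would come from a checkpoint persisted after each successful run, so downstream consumers process each change exactly once.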
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana
Remote
Software Engineer II (Data)
Hyderabad, Telangana, India
Date posted: Jul 24, 2025
Job number: 1850170
Work site: Up to 50% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview: Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers to levels they cannot achieve anywhere else. This is a world of more possibilities, more innovation, more openness in a cloud-enabled world. The Business & Industry Copilots group is a rapidly growing organization that is responsible for the Microsoft Dynamics 365 suite of products, Power Apps, Power Automate, Dataverse, AI Builder, Microsoft Industry Solutions, and more. Microsoft is considered one of the leaders in Software as a Service in the world of business applications, and this organization is at the heart of how business applications are designed and delivered.

This is an exciting time to join our group, BIC Customer Experience, and work on something highly strategic to Microsoft. The goal of Customer Zero Engineering is to build the next generation of our applications running on Dynamics 365, AI, Copilot, and several other Microsoft cloud services to drive AI transformation across the Marketing, Sales, Services, and Support organizations within Microsoft. We innovate quickly and collaborate closely with our partners and customers in an agile, high-energy environment. Leveraging the scalability and value of Azure and Power Platform, we ensure our solutions are robust and efficient. Our organization's implementation acts as a reference architecture for large companies and helps drive product capabilities. If the opportunity to collaborate with a diverse engineering team on enabling end-to-end business scenarios using cutting-edge technologies, and to solve challenging problems for large-scale 24x7 business SaaS applications, excites you, please come and talk to us!

We are looking for talented and motivated data engineers interested in helping our organization empower learners by producing valuable data that can be used to understand the organization's needs and make the right decisions. We want you for your passion for technology, your curiosity and willingness to learn, your ability to communicate well in a team environment, your desire to make our team better with your contributions, and your ability to deliver. We use industry-standard technology: C#, JavaScript/TypeScript, HTML5, ETL/ELT, data warehousing, and/or Business Intelligence development.

Qualifications (required):
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
3+ years of experience in business analytics, software development, data modeling, or data engineering work
Software development using languages like C#, JavaScript, or Java
Experience using a variety of data stores, including data warehouses, RDBMS, in-memory caches, and document databases
Proficiency with SQL and NoSQL and hands-on experience using distributed computing platforms
Experience developing on cloud platforms (e.g., Azure, AWS) in a continuous delivery environment
Strong problem solving, design, implementation, and communication skills
Strong intellectual curiosity and passion for learning new technologies

Preferred Qualifications:
Experience with data engineering projects with a firm sense of accountability and ownership
Experience in ETL/ELT, data warehousing, data pipelines, and/or Business Intelligence development
Experience using ML, anomaly detection, predictive analysis, and exploratory data analysis.
A strong understanding of the value of data, data exploration, and the benefits of a data-driven organizational culture.
Business Intelligence or visualization experience with tools such as Power BI is also beneficial.
Experience implementing data systems in C#/Python/Scala or similar.
Working knowledge of any (or multiple) of the following tech stacks is a plus: SQL, Databricks, PySparkSQL, Azure Synapse, Azure Data Factory, Azure Fabric, or similar.
Basic knowledge of the Microsoft Dynamics platform is an added advantage.
#BICJobs

Responsibilities
Implement scalable data models, data pipelines, data storage, management, and transformation solutions for real-time decisioning, reporting, data collection, and related functions.
Leverage machine learning (ML) model knowledge to implement appropriate solutions for business objectives.
Ship high-quality, well-tested, secure, and maintainable code.
Develop and maintain software designed to improve data governance and security.
Troubleshoot and resolve issues related to data processing and storage.
Collaborate effectively with teammates, other teams, and disciplines, and drive improvements in engineering.
Create and implement code for a product, service, or feature, reusing code as applicable.
Contribute to efforts to break down larger work items into smaller work items and provide estimates.
Troubleshoot live-site issues as part of product development and serve as Designated Responsible Individual (DRI) during live-site rotations.
Remain current in skills by investing time and effort into staying abreast of the latest technologies.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work:
Industry-leading healthcare
Educational resources
Discounts on products and services
Savings and investments
Maternity and paternity leave
Generous time away
Giving programs
Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
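To give a concrete flavor of the pipeline work this posting describes, here is a minimal Scala/Spark sketch of a batch ETL job: read raw events, cleanse and enrich them, and persist a partitioned table for downstream reporting. The paths and table names are hypothetical placeholders, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-etl")
      .getOrCreate()

    // Hypothetical raw landing zone (e.g., an Azure Data Lake container).
    val raw = spark.read.json("abfss://raw@example.dfs.core.windows.net/events/")

    // Basic cleansing and enrichment: drop malformed rows, derive a date column.
    val curated = raw
      .filter(col("eventId").isNotNull)
      .withColumn("eventDate", to_date(col("eventTimestamp")))

    // Partitioned output keeps reporting queries cheap; table name is assumed.
    curated.write
      .mode("overwrite")
      .partitionBy("eventDate")
      .saveAsTable("analytics.curated_events")

    spark.stop()
  }
}
```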
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka
On-site
At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we — as a Takeda team — can discover and deliver life-transforming treatments, guided by our commitment to patients, our people, and the planet. People join Takeda because they share in our purpose. And they stay because we’re committed to an inclusive, safe, and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions.

Job ID: R0152063
Date posted: 07/24/2025
Location: Bengaluru, Karnataka

I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description
About the role:
The Platform Engineer is responsible for building large-scale systems and creating robust, scalable platforms used by every team. This includes building the data transport, collection, and orchestration layers. These efforts make business and user-behavior insights accessible.

How you will contribute:
Build and maintain high-performance, fault-tolerant, secure, and scalable platforms
Partner with architects and business leaders to design and build robust services using the storage layer, streaming, and batch data
Think through the long-term impact of key design decisions and how to handle failure scenarios
Form a holistic understanding of tools, key business concepts (data tables), and the data and team dependencies
Help drive the storage-layer and API feature roadmap, and own the overall engineering (design, implementation, and testing)
Build self-service platforms to empower users
Lead development of high-leverage projects and capabilities of the platform

Skills and qualifications:
Utilizes DevSecOps tools and methodologies to enhance security and operational processes
Designs and implements effective data models to improve data accessibility and utility
Applies Agile and SDLC methodologies to optimize the software development life cycle
Understands and manipulates data structures and algorithms to solve complex problems
Works with distributed data technologies such as Spark and Hadoop for large-scale data processing
Integrates multiple systems, ensuring seamless data flow and functionality
Proficient in data engineering programming languages including SQL, Python, Scala, Java, and C++
Deploys and manages applications on cloud platforms using tools like Kubernetes

As an entry-level professional, you will tackle challenges within a focused and manageable scope. Your role is pivotal in applying core theories and concepts to practical scenarios, reflecting a seamless transition from academic excellence to professional application. You will harness standard methodologies to evaluate situations and data, cultivating a budding understanding of industry practices. Typically, this role requires a bachelor's or college degree, or the equivalent professional experience. Your role is characterized by growth and learning; as your journey within Takeda evolves, you will foster valuable internal relationships.

BENEFITS
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career.
Amongst our benefits are:
Competitive salary + performance annual bonus
Flexible work environment, including hybrid working
Comprehensive healthcare insurance plans for self, spouse, and children
Group Term Life Insurance and Group Accident Insurance programs
Health & wellness programs, including annual health screening and weekly health sessions for employees
Employee Assistance Program
3 days of leave every year for voluntary service, in addition to humanitarian leave
Broad variety of learning platforms
Diversity, Equity, and Inclusion programs
Reimbursements – home internet & mobile phone
Employee Referral Program
Company-provided transport: available at scheduled times for 2nd-shift employees for a smooth commute to the office and back home
Security escort for drop-off: female employees receive a security escort to their designated drop-off point after the shift for safety
Shift allowance: an additional shift allowance will be provided for hours worked outside regular working hours
Food/meal: meals will be provided for 2nd-shift employees
Leaves – paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 calendar days)

ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
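The transport and collection layers this role describes are commonly built with streaming frameworks such as Spark, which the posting lists. As an illustrative sketch only (the Kafka broker, topic, and paths are assumptions, not Takeda's actual stack), a minimal Scala Structured Streaming job that lands events fault-tolerantly might look like this:

```scala
import org.apache.spark.sql.SparkSession

object IngestEvents {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ingest-events")
      .getOrCreate()

    // Consume raw events from a hypothetical Kafka topic.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // assumed broker
      .option("subscribe", "events")                    // assumed topic
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

    // Checkpointing gives restart safety and at-least-once delivery,
    // which is the practical meaning of "fault-tolerant" for a collection layer.
    stream.writeStream
      .format("parquet")
      .option("path", "/data/landing/events")           // assumed path
      .option("checkpointLocation", "/data/checkpoints/events")
      .start()
      .awaitTermination()
  }
}
```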
Posted 1 week ago
10-14 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Applications Development Technology Lead Analyst role is a senior position in which you will be responsible for implementing new or updated application systems and programs in collaboration with the Technology team. Your main objective will be to lead applications systems analysis and programming activities.

Your responsibilities will include: partnering with various management teams to ensure the integration of functions needed to achieve goals; identifying necessary system enhancements for new products and process improvements; resolving high-impact problems and projects by evaluating complex business processes; providing expertise in applications programming; ensuring application design aligns with the architecture blueprint; developing standards for coding, testing, debugging, and implementation; gaining comprehensive knowledge of how the business areas integrate; analyzing issues to develop innovative solutions; advising mid-level developers and analysts; assessing risks in business decisions; and being a team player who can adapt to changing priorities.

The required skills for this role include strong knowledge of Spark using Java/Scala and the Hadoop ecosystem, with hands-on experience in Spark Streaming; proficiency in Java programming with experience in the Spring Boot framework; and familiarity with database technologies such as Oracle and the Starburst and Impala query engines. Knowledge of bank reconciliation tools such as SmartStream TLM Recs Premium / Exceptor / Quickrec is an added advantage.

To qualify for this position, you should have 10+ years of relevant experience in an applications development or systems analysis role, extensive experience in system analysis and programming of software applications, and experience in managing and implementing successful projects. You should be a Subject Matter Expert (SME) in at least one area of applications development, able to adjust priorities quickly, with demonstrated leadership and project management skills, clear and concise communication skills, and experience in building and implementing reporting platforms. A Bachelor's degree/University degree or equivalent experience is required (Master's degree preferred).

This job description is a summary of the work performed; other job-related duties may be assigned as needed.
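Since the role names hands-on Spark Streaming in Java/Scala, a minimal Scala sketch of that skill follows: a per-account transaction count over one-minute windows. The socket source, input format, and console sink are illustrative assumptions for demonstration, not the systems this role works with.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TxnCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("txn-counts")
      .getOrCreate()
    import spark.implicits._

    // Demo source; a production job would read from a durable transport
    // such as Kafka rather than a socket.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Each line is assumed to be "accountId,amount"; parse, stamp, and
    // count per account over tumbling one-minute windows.
    val counts = lines.as[String]
      .map(_.split(","))
      .map(f => (f(0), f(1).toDouble))
      .toDF("accountId", "amount")
      .withColumn("ts", current_timestamp())
      .groupBy(window($"ts", "1 minute"), $"accountId")
      .count()

    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```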
Posted 1 week ago