5.0 - 9.0 years
0 Lacs
Karnataka
On-site
The Vontier Data & Analytics Hub is looking for an experienced Snowflake Data Engineer to join our team. In this role, you will design, develop, and maintain data pipelines and data models on the Snowflake cloud data platform, and implement best practices for data quality, security, and performance. You will collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions, provide technical guidance and mentorship to junior data engineers, and stay abreast of the latest trends and technologies in data engineering. To qualify, you should hold a Bachelor's degree in computer science, engineering, or a related field, along with a minimum of 5 years of experience in data engineering, preferably in a cloud environment. Proficiency in SQL and Python, familiarity with cloud platforms such as AWS or Azure, and hands-on experience with the Snowflake data warehouse are essential, as is expertise in ETL/ELT processes, data modeling, data warehousing concepts, and performance tuning in Snowflake. Snowflake certifications, experience with data visualization tools such as Power BI, and familiarity with finance, procurement, and manufacturing processes are advantageous, as is knowledge of MLOps and decision-science applications built on Snowpark compute and Snowflake model and feature registries. Vontier (NYSE: VNT) is a global industrial technology company that integrates productivity, automation, and multi-energy technologies to serve an evolving mobility ecosystem. With a culture of continuous improvement and innovation, Vontier offers a dynamic, innovative, and inclusive environment that values personal growth, work-life balance, and collaboration. Join us in enabling the way the world moves, and let's navigate challenges and seize opportunities together at Vontier.
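For illustration, a minimal sketch of the kind of Snowflake pipeline work this posting describes, using the Snowpark Python API; the connection parameters, table names, and columns are invented placeholders, not details from the posting.

```python
# Minimal Snowpark sketch, assuming a configured Snowflake account;
# all identifiers below are illustrative, not from the posting.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Credentials would normally come from a secrets manager, not literals.
session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# Read a raw table, aggregate, and persist a curated model table.
orders = session.table("RAW_ORDERS")
daily_revenue = (
    orders.filter(col("STATUS") == "COMPLETE")
          .group_by(col("ORDER_DATE"))
          .agg(sum_(col("AMOUNT")).alias("REVENUE"))
)
daily_revenue.write.mode("overwrite").save_as_table("CURATED_DAILY_REVENUE")
session.close()
```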
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
You should have 5-7 years of experience in Java development and a Bachelor's degree in Computer Science or a related field. Strong expertise in Java/J2EE technologies is required, along with proficiency in web frontend technologies such as HTML, JavaScript, and CSS. You should know Java frameworks such as Spring MVC and Spring Security, have experience with REST APIs and writing Python libraries, and be familiar with databases (e.g., MySQL, Oracle) and SQL, with strong scripting skills in languages such as Python, Perl, or Bash. Experience in backend programming with Java/Python/Scala and the ability to work on full-stack development using Java technologies are important.

Your responsibilities will include designing and developing Java services using the Spring Boot framework; implementing, supporting, troubleshooting, and maintaining applications; developing high-standard SAS/Python code and model documentation; working on the release cycle of modern, Java-based web applications; developing automation scripts in Python; and writing efficient, reusable, and reliable Java code.

Required Skills:
- Strong Java programming skills
- Experience with the Spring framework and Hibernate
- Proficiency in developing microservices in Java
- Knowledge of design patterns and Java frameworks
- Hands-on experience with front-end and back-end Java technologies
- Familiarity with automation tools such as Selenium and Protractor
- Good understanding of web services and RESTful APIs
- Experience with ORM frameworks such as Hibernate/JPA

Additional Skills:
- Proficiency in Python or other relevant scripting languages
- Experience in web/mobile application development
- Understanding of high-level JavaScript concepts
- Ability to work with automation tools for testing
- Knowledge of machine learning, AI, or data science is a plus

This is a full-time, in-person position with benefits including health insurance and Provident Fund, on a day shift from Monday to Friday. You should be comfortable relocating to Bangalore and willing to travel. A Master's degree is preferred, and experience in Core Java (6 years), Spring Boot (4 years), and Python (3 years) is required.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a Machine Learning Engineer at Expedia Group, you will work in a cross-functional, geographically distributed team of machine learning engineers and ML scientists. Your role involves designing and coding large-scale batch and real-time pipelines on the cloud, prototyping creative solutions quickly, developing minimum viable products, and collaborating with seniors and peers to implement the team's technical vision. You will act as a point of contact for junior team members, offering advice and direction, and participate in all phases of the end-to-end ML model lifecycle for enterprise applications, collaborating with a global team of data scientists, administrators, data analysts, data engineers, and data architects on production systems and applications. You will also work closely with cross-functional teams to integrate generative AI solutions into existing workflow systems. Your responsibilities further include participating in code reviews to assess overall code quality and flexibility; defining, developing, and maintaining artifacts like technical design and partner documentation; and maintaining, monitoring, supporting, and improving solutions and systems with a focus on service excellence.

To be successful in this role, you should have a degree in software engineering, computer science, informatics, or a similar field, with 5+ years of experience for Bachelor's degree holders or 3+ years for Master's degree holders. You should be comfortable programming in Python (primary) and Scala (secondary) and have hands-on experience with OOAD, design patterns, SQL, and NoSQL. Knowledge of big data technologies such as Spark, Hive, Hue, and Databricks is essential, as is experience developing and deploying batch and real-time inferencing applications. You should also have a good understanding of machine learning pipelines and the ML lifecycle, traditional ML algorithms, and the Gen-AI tool and tech stack. Experience with cloud services (e.g., AWS) and workflow orchestration tools (e.g., Airflow) is preferred, as are a passion for learning, especially in micro-services, system architecture, data science, and machine learning, and experience with Agile/Scrum methodologies.

If you need assistance with any part of the application or recruiting process due to a disability or physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. Expedia Group is committed to creating an inclusive and diverse work environment where everyone belongs and differences are celebrated.
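As an illustration of the workflow-orchestration experience the posting asks for, here is a minimal Airflow DAG sketch (assuming Airflow 2.4+); the DAG id, schedule, and task logic are invented.

```python
# Illustrative Airflow DAG for a daily batch-inference step; the task body
# is a placeholder for loading features, scoring, and writing results.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_batch_inference(**context):
    # Placeholder: load features, score with a registered model, write results.
    print("scoring batch for", context["ds"])

with DAG(
    dag_id="daily_batch_inference",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # requires Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(
        task_id="score_batch",
        python_callable=run_batch_inference,
    )
```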
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
We are seeking a skilled and seasoned Senior Data Engineer to join our team. The ideal candidate has a solid foundation in data engineering and proficiency in Azure, Azure Data Factory (ADF), Azure Fabric, Databricks, and Snowflake. The position involves creating, developing, and maintaining data pipelines, ensuring data quality and accessibility, and collaborating with various teams to support our data-centric initiatives.

Your responsibilities include designing, developing, and maintaining robust data pipelines using Azure Data Factory, Azure Fabric, Databricks, and Snowflake; working closely with data scientists, analysts, and stakeholders to understand data requirements and guarantee data availability and quality; and implementing and refining ETL processes to ingest, transform, and load data from diverse sources into data warehouses, data lakes, and Snowflake. You will uphold data integrity and security by implementing best practices and complying with data governance policies; monitor and resolve data pipeline issues to ensure timely, accurate data delivery; and enhance data storage and retrieval processes to improve performance and scalability, staying abreast of industry trends and best practices in data engineering and cloud technologies. You will also have the opportunity to mentor junior data engineers, offering technical expertise and assistance as required.

To qualify, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, along with over 5 years of experience in data engineering with a strong emphasis on Azure, ADF, Azure Fabric, Databricks, and Snowflake. Proficiency in SQL, experience in data modeling and database design, and strong programming skills in Python, Scala, or Java are essential, as is familiarity with big data technologies like Apache Spark, Hadoop, and Kafka, and a solid grasp of data warehousing concepts and solutions (e.g., Azure Synapse Analytics, Snowflake). Knowledge of data governance, data quality, and data security best practices, excellent problem-solving abilities, and effective communication and collaboration skills are all highly valued. Preferred qualifications include experience with other Azure services such as Azure Blob Storage, Azure SQL Database, and Azure Cosmos DB; familiarity with DevOps practices and tools for CI/CD in data engineering; and certifications in Azure data engineering, Snowflake, or related fields.
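A brief sketch of the kind of Databricks/ADLS ETL step such a role involves; the storage paths, container names, and columns are placeholders, and this is illustrative rather than any employer's actual pipeline.

```python
# Sketch of a Databricks-style ETL step: raw CSV files from ADLS Gen2 into a
# Delta table. Assumes a cluster with Delta Lake and storage access configured.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = (spark.read.option("header", "true")
       .csv("abfss://raw@<storageaccount>.dfs.core.windows.net/orders/"))

# Basic cleansing: deduplicate on the key and enforce a numeric amount.
cleaned = (raw.dropDuplicates(["order_id"])
              .withColumn("amount", F.col("amount").cast("double"))
              .filter(F.col("amount").isNotNull()))

(cleaned.write.format("delta").mode("overwrite")
        .save("abfss://curated@<storageaccount>.dfs.core.windows.net/orders/"))
```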
Posted 5 days ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Software Engineer, you will play a crucial role in developing tools and features for the organization's Data and ML platforms. You will work on projects spanning data processing, an insights portal, data observability, data lineage, a model hub, and data visualization, either creating custom solutions from scratch or customizing existing open-source products to meet the company's specific needs. The ideal candidate enjoys tackling challenges, solves problems creatively, excels in collaborative environments, and can deliver high-quality software within tight deadlines and constraints. The position involves developing innovative tools and frameworks that extend the capabilities of third-party BI tools through APIs.

To qualify, you should have a minimum of 4 years of hands-on experience with programming languages such as Java, Python, or Scala; expertise in designing and implementing scalable microservices and REST APIs; and hands-on experience with SQL and NoSQL databases. Experience building and deploying cloud-native applications on platforms like AWS, GCP, or others is essential, as is proficiency with DevOps tools, containers, and Kubernetes. Strong communication and interpersonal skills are necessary for effective collaboration with cross-functional teams, along with a strong sense of ownership of project deliverables.

Preferred qualifications include knowledge of big data technologies and platforms, familiarity with distributed computing frameworks like Spark, and experience with SQL query engines such as Trino and Hive. Previous exposure to AI/ML and data science domains is advantageous, as is proficiency in JavaScript libraries and frameworks like React; experience with Business Intelligence (BI) platforms like Tableau, ThoughtSpot, and Business Objects is a plus. A relevant academic background coupled with hands-on software development experience is preferred. The successful candidate is proactive, detail-oriented, and effective in a fast-paced environment. If you are passionate about developing cutting-edge data and ML solutions, this role offers an exciting opportunity to contribute to impactful projects and drive innovation within the organization.
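To illustrate the "scalable microservices and REST APIs" requirement, a minimal FastAPI sketch; the resource model and endpoints are invented for the example.

```python
# Minimal REST microservice sketch in FastAPI; the in-memory store stands in
# for a real database, and the Dataset resource is purely illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Dataset(BaseModel):
    name: str
    owner: str

_store: dict[int, Dataset] = {}

@app.post("/datasets/{dataset_id}")
def create_dataset(dataset_id: int, dataset: Dataset):
    _store[dataset_id] = dataset
    return {"id": dataset_id, "name": dataset.name, "owner": dataset.owner}

@app.get("/datasets/{dataset_id}")
def read_dataset(dataset_id: int):
    if dataset_id not in _store:
        raise HTTPException(status_code=404, detail="dataset not found")
    return _store[dataset_id]
```

Run with `uvicorn app:app` (assuming the file is named `app.py`) and the service exposes create/read endpoints with automatic request validation.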
Posted 5 days ago
5.0 years
0 Lacs
India
On-site
About the Role
We are looking for a new-age Data Engineer who can go beyond traditional pipelines and work in a fast-paced, AI-first eCommerce environment. You will play a pivotal role in designing, building, and maintaining robust data architectures that power integrations with multiple marketplaces (like Amazon, Flipkart, Shopify, Meesho, and others) and enable real-time decisioning through AI/ML systems. If you're passionate about eCommerce, love solving complex data problems, and enjoy working at the intersection of APIs, cloud infrastructure, and AI, this role is for you.

Key Responsibilities
- Design and implement scalable, fault-tolerant data pipelines for ingestion, transformation, and synchronization across multiple marketplaces
- Integrate and manage APIs and webhook systems for real-time data capture from external platforms
- Collaborate with AI/ML teams to serve structured and semi-structured data for model training and inference
- Build and maintain data lakes and data warehouses with efficient partitioning and schema design
- Ensure data reliability, accuracy, governance, and lineage tracking
- Automate monitoring and anomaly detection using modern observability tools
- Optimize data infrastructure costs on cloud platforms (AWS/GCP/Azure)

Key Qualifications
- 2–5 years of experience in data engineering or backend systems with large-scale data
- Strong programming skills in Python or Scala (bonus: familiarity with TypeScript/Node.js)
- Deep understanding of SQL and NoSQL databases (PostgreSQL, MongoDB, etc.)
- Experience with distributed data processing tools like Apache Spark, Kafka, or Airflow
- Proficiency in using APIs, webhooks, and event-driven data architectures
- Experience working with cloud-native data tools (e.g., AWS Glue, S3, Redshift, BigQuery, or Snowflake)
- Solid grasp of data modeling, ETL/ELT design, and performance tuning
- Bonus: familiarity with data versioning tools (e.g., DVC, LakeFS), AI pipelines, or vector databases

Nice to Have
- Experience in the eCommerce or retail tech domain
- Exposure to MLOps and working with feature stores
- Knowledge of GraphQL, gRPC, or modern API paradigms
- Interest or prior experience in real-time recommender systems or personalization engines

What We Offer
- Work in a high-growth, AI-native eCommerce tech environment
- Autonomy to choose tools, propose architectures, and shape the data roadmap
- Opportunity to work closely with AI scientists, product managers, and integration teams
- Flexible work location, inclusive culture, and ownership-driven roles

📩 Interested candidates can send their resume to: careers@kartavyatech.com
✉️ Subject Line: Application – Data Engineer (AI x eCommerce)
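As a sketch of the webhook-to-pipeline pattern this posting describes, here is a hedged example that publishes a marketplace webhook payload to Kafka using the kafka-python client; the broker address, topic name, and payload shape are assumptions, not details from the posting.

```python
# Sketch: push marketplace webhook payloads onto Kafka for downstream pipelines.
import json
from kafka import KafkaProducer  # kafka-python package

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def handle_order_webhook(payload: dict) -> None:
    # Key by marketplace so one marketplace's events stay ordered per partition.
    producer.send(
        "marketplace-orders",
        key=payload["marketplace"].encode("utf-8"),
        value=payload,
    )

handle_order_webhook({"marketplace": "amazon", "order_id": "A-123", "amount": 499})
producer.flush()
```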
Posted 5 days ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a Senior Data Engineer at Buildnetic, a Singapore-headquartered company with its office in Bangalore, you will leverage 8 to 12 years of experience to design, implement, and manage the data infrastructure that drives data-driven decision-making. In this hybrid role, you will work with cutting-edge technologies to construct data pipelines, architect data models, and uphold data integrity. Your key responsibilities include designing, developing, and maintaining scalable data pipelines and architectures; working with large datasets to create efficient ETL processes; and partnering with data scientists, analysts, and stakeholders to discern business requirements. Ensuring data quality through cleaning, validation, and profiling; implementing data models for optimal performance in data warehouses and data lakes; and managing cloud data infrastructure on platforms like AWS, Azure, or GCP are essential aspects of the role.

You will work with programming languages including Python, SQL, Java, and Scala, alongside data warehousing and data lake tools such as Snowflake, Redshift, Databricks, Hadoop, Hive, and Spark. Your expertise in data modeling techniques, ETL tools like Informatica and Talend, and management of both NoSQL and relational databases will be critical, and experience with CI/CD pipelines, Git for version control, troubleshooting complex data infrastructure issues, and Linux/Unix systems will be advantageous. If you possess strong problem-solving skills, effective communication abilities, and prior experience in a hybrid work environment, Buildnetic offers the opportunity to join a forward-thinking company that prioritizes innovation and technological advancement, a talented and collaborative team, a flexible hybrid working model, and a competitive salary and benefits package. If you are passionate about data engineering and eager to work with the latest technologies, we look forward to hearing from you.
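A small, self-contained illustration of the cleaning/validation/profiling work mentioned above, using pandas; the rules and sample data are invented.

```python
# Toy data-quality checks on a pandas frame; thresholds and rules are illustrative.
import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [100.0, None, 250.0, -5.0],
})

checks = {
    "no_duplicate_ids": df["order_id"].is_unique,
    "no_null_amounts": df["amount"].notna().all(),
    "amounts_positive": (df["amount"].dropna() > 0).all(),
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```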
Posted 5 days ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Skills: Java, Spring Boot, SQL, microservices, coding, messaging queues.
- 6+ years of hands-on experience in Java
- Experience building order and execution management or trading systems is required
- Financial-industry experience and exposure to trading
- In-depth understanding of concurrent programming and experience designing high-throughput, high-availability, fault-tolerant distributed applications is required
- Experience building distributed applications using NoSQL technologies like Cassandra, coordination services like Zookeeper, and caching technologies like Apache Ignite and Redis is strongly preferred
- Experience building a microservices architecture / SOA is required
- Experience with message-oriented streaming middleware (Kafka, MQ, NATS, AMPS) is preferred
- Experience with orchestration, containerization, and building cloud-native applications (AWS, Azure) is a plus
- Experience with modern web technology such as Angular, React, and TypeScript is a plus
- Strong analytical and software architecture design skills with an emphasis on test-driven development
- Experience in programming languages such as Scala or Python would be a plus
- Experience using project management methodologies such as Agile/Scrum
- Effective communication and presentation skills (written and verbal) are required
- Bachelor's or Master's degree in computer science or engineering
Posted 5 days ago
4.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description – Data Engineer

We at Pine Labs are looking for those who share our core belief: every day is game day. We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services.

Role Purpose
We are looking for skilled Data Engineers with 4-12 years of experience to join our growing team. You will design, build, and optimize real-time and batch data pipelines, leveraging AWS cloud technologies and Apache Pinot to enable high-performance analytics for our business. This role is ideal for engineers who are passionate about working with large-scale data and real-time processing.

Responsibilities We Entrust You With
- Data pipeline development: Build and maintain robust ETL/ELT pipelines for batch and streaming data using tools like Apache Spark, Apache Flink, or AWS Glue. Develop real-time ingestion pipelines into Apache Pinot using streaming platforms like Kafka or Kinesis.
- Real-time analytics: Configure and optimize Apache Pinot clusters for sub-second query performance and high availability. Design indexing strategies and schema structures to support real-time and historical data use cases.
- Cloud infrastructure management: Work extensively with AWS services such as S3, Redshift, Kinesis, Lambda, DynamoDB, and CloudFormation to create scalable, cost-effective solutions. Implement infrastructure as code (IaC) using tools like Terraform or the AWS CDK.
- Performance optimization: Optimize data pipelines and queries to handle high throughput and large-scale data efficiently. Monitor and tune Apache Pinot and AWS components for peak performance.
- Data governance & security: Ensure data integrity, security, and compliance with organizational and regulatory standards (e.g., GDPR, SOC 2). Implement data lineage, access controls, and auditing mechanisms.
- Collaboration: Work closely with data scientists, analysts, and other engineers to translate business requirements into technical solutions. Collaborate in an Agile environment, participating in sprints, standups, and retrospectives.

Relevant Work Experience
- 4-12 years of hands-on experience in data engineering or related roles
- Proven expertise with AWS services and real-time analytics platforms like Apache Pinot or similar technologies (e.g., Druid, ClickHouse)
- Proficiency in Python, Java, or Scala for data processing and pipeline development
- Strong SQL skills and experience with both relational and NoSQL databases
- Hands-on experience with streaming platforms such as Apache Kafka or AWS Kinesis
- Familiarity with big data tools like Apache Spark, Flink, or Airflow
- Strong problem-solving skills and a proactive approach to challenges
- Excellent communication and collaboration abilities in cross-functional teams

Preferred Qualifications
- Experience with data lakehouse architectures (e.g., Delta Lake, Iceberg)
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes)
- Exposure to monitoring tools like Prometheus, Grafana, or CloudWatch
- Familiarity with data visualization tools like Tableau or Superset

What We Offer
- Competitive compensation based on experience
- Flexible work environment with opportunities for growth
- Work on cutting-edge technologies and projects in data engineering and analytics

What We Value in Our People
- You take the shot: you decide fast and you deliver right.
- You are the CEO of what you do: you show ownership and make things happen.
- You own tomorrow: by building solutions for the merchants and doing the right thing.
- You sign your work like an artist: you seek to learn and take pride in the work you do.
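To make the real-time ingestion path concrete, a minimal boto3 sketch that puts an event onto a Kinesis stream of the sort a Pinot real-time table could consume; the stream name, region, and event schema are illustrative assumptions.

```python
# Sketch of real-time event ingestion via AWS Kinesis; a Pinot real-time table
# would be configured separately to consume from this stream.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")

event = {"txn_id": "T-1001", "merchant_id": "M-42", "amount_inr": 1250}

kinesis.put_record(
    StreamName="payments-events",           # placeholder stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["merchant_id"],      # keeps a merchant's events ordered
)
```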
Posted 5 days ago
3.0 years
0 Lacs
Guwahati, Assam, India
On-site
Job Title: Senior Software Engineer – Backend
Location: Guwahati
Experience: 3-4+ years
Education: BE/B.Tech or higher in Computer Science or a related field

About Vantage Circle
Vantage Circle is a leading SaaS platform offering AI-powered employee engagement solutions to top organizations worldwide. We're growing fast and looking for passionate technologists to help shape scalable backend services that power our products.

Role Overview
We are seeking a skilled Senior Software Engineer (Backend) with a strong foundation in building high-performance, scalable backend systems. You will play a key role in designing, developing, and deploying critical backend components while mentoring team members and driving technical excellence.

Key Responsibilities
- Technical excellence: Design and develop robust, scalable backend systems and APIs, delivering high-quality, well-tested code aligned with industry best practices.
- Architectural contributions: Take ownership of complex backend architecture and systems design; contribute to technology roadmaps that support business objectives.
- Project leadership: Lead end-to-end development of critical features and services with minimal supervision, ensuring timely delivery.
- Mentorship & coaching: Support junior and mid-level engineers through code reviews, pair programming, and knowledge sharing to elevate overall team performance.
- Cross-functional collaboration: Work closely with product managers, designers, frontend engineers, and DevOps to build cohesive and impactful features.
- Problem solving & innovation: Proactively identify bottlenecks and architectural challenges, and propose and implement innovative solutions to enhance system performance and maintainability.

Preferred Tech Stack
- Programming languages: Scala (preferred), Java
- Frameworks: Play Framework or similar Java-based frameworks
- Databases: MySQL, MongoDB
- Caching/data stores: Redis
- Tools: Git, Jenkins CI/CD, Docker (bonus)

What We're Looking For
- Strong understanding of object-oriented and functional programming paradigms
- Experience designing RESTful APIs and building scalable microservices
- Good understanding of relational and NoSQL databases
- Familiarity with performance tuning and distributed system design
- Ability to thrive in a fast-paced, collaborative, agile environment
- Passion for clean code, testing, and continuous improvement
Posted 5 days ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Backend Developer at our company, you will use your expertise in Go, Java, or Scala to help build cutting-edge AI/ML platforms. Your proficiency with Google Cloud Platform (GCP) will be instrumental in designing and developing microservices on Kubernetes, and your experience with frontend technologies such as TypeScript and JavaScript frameworks like React will play a crucial role in creating scalable, secure enterprise applications. You will dive deep into technology and complex distributed systems, tackling challenging projects with precision, and collaborate with a diverse team of talented individuals in an environment where innovation thrives. Whether working locally or globally, you will have the chance to expand your skill set and work on exciting projects across industries such as high-tech, communications, media, healthcare, retail, and telecom.

We prioritize work-life balance with flexible work schedules, opportunities for remote work, and generous paid time off and holidays. We also provide a range of professional development opportunities, including communication skills training, stress management programs, professional certifications, and technical and soft-skill training. We value our employees and offer competitive salaries, family medical insurance, various insurance benefits, retirement savings options, extended maternity leave, performance bonuses, and referral bonuses, along with a fun and vibrant work environment featuring sports events, cultural activities, subsidized food options, corporate parties, and discounts at popular stores and restaurants.

Joining GlobalLogic means becoming part of a leading digital engineering company that collaborates with global brands to create innovative products and digital experiences. With a focus on experience design, complex engineering, and data expertise, we help clients envision the future and drive digital transformation across multiple industries worldwide. Headquartered in Silicon Valley and operating globally, GlobalLogic is a Hitachi Group Company known for its commitment to innovation and sustainability through data and technology. As part of our team, you will be at the forefront of shaping tomorrow's digital businesses and contributing to a more sustainable society with a higher quality of life.
Posted 5 days ago
7.0 - 11.0 years
0 Lacs
Hyderabad, Telangana
On-site
As part of our team in the Maps division, you will contribute to the development of tools that analyze, visualize, process, manage, and curate data on a large scale. Our focus is on combining various signals, such as data analytics, community engagement, and user feedback, to enhance the Maps platform. Your responsibilities may include analyzing extensive data sets to identify map errors, devising and executing sophisticated algorithms to address those errors, collaborating with a team of engineers and analysts to review solutions, and integrating the final resolution into the data processing pipeline. We are seeking engineers who can actively participate in constructing a vast, scalable, distributed system that powers the maps data platform. Successful candidates will possess outstanding engineering skills, effective communication abilities, and a belief that data-driven feedback leads to exceptional products.

Key requirements:
- Minimum of 7 years of experience in a software product development role, focused on building modern, scalable big data applications
- Advanced proficiency in Scala functional programming (intermediate-to-advanced level is a prerequisite)
- Proficiency in implementing data structures and algorithms in Scala
- Proficiency in Hadoop-ecosystem technologies for big data processing, such as Spark
- Experience developing RESTful web services using Scala, Python, or similar languages

Familiarity with geospatial concepts is a plus. We value flexibility, work-life balance, continuous learning, and leadership mentoring; with a hybrid work policy in place, we believe flexibility enables our teams to perform at their best. We prioritize work-life balance through fun activities, games, events, and outings so that every day at work is engaging and enjoyable. If you are passionate about building innovative solutions and enjoy working in a dynamic environment, we invite you to join us at our Hyderabad location.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You should be proficient in Apache Spark and PySpark, with a strong understanding of Spark SQL, DataFrames, and RDD optimization techniques. Solid Python programming skills are required, and familiarity with languages like Scala is a plus. Experience with cloud platforms, particularly AWS (e.g., EMR, S3, Lambda), is essential, and an understanding of DocumentDB, Aurora PostgreSQL, and distributed computing environments is beneficial. Key skills for this role: Spark, Scala, PySpark, Spark SQL, Python, and AWS.
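A short PySpark sketch of the DataFrame optimization techniques the posting names, showing a broadcast join and caching; the sample data is invented.

```python
# Broadcasting a small dimension table avoids a shuffle: Spark ships the small
# table to every executor, turning a shuffle join into a map-side join.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join_opt").getOrCreate()

events = spark.createDataFrame(
    [(1, "IN"), (2, "US"), (3, "IN")], ["event_id", "country_code"])
countries = spark.createDataFrame(
    [("IN", "India"), ("US", "United States")], ["country_code", "country"])

enriched = events.join(broadcast(countries), "country_code")
enriched.cache()  # reuse across downstream actions without recomputation
enriched.groupBy("country").agg(F.count("*").alias("events")).show()
```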
Posted 5 days ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Skills: Java, Spring Boot, SQL, microservices, coding, messaging queues.
- 6+ years of hands-on experience in Java
- Experience building order and execution management or trading systems is required
- Financial-industry experience and exposure to trading
- In-depth understanding of concurrent programming and experience designing high-throughput, high-availability, fault-tolerant distributed applications is required
- Experience building distributed applications using NoSQL technologies like Cassandra, coordination services like Zookeeper, and caching technologies like Apache Ignite and Redis is strongly preferred
- Experience building a microservices architecture / SOA is required
- Experience with message-oriented streaming middleware (Kafka, MQ, NATS, AMPS) is preferred
- Experience with orchestration, containerization, and building cloud-native applications (AWS, Azure) is a plus
- Experience with modern web technology such as Angular, React, and TypeScript is a plus
- Strong analytical and software architecture design skills with an emphasis on test-driven development
- Experience in programming languages such as Scala or Python would be a plus
- Experience using project management methodologies such as Agile/Scrum
- Effective communication and presentation skills (written and verbal) are required
- Bachelor's or Master's degree in computer science or engineering
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY's Advisory Services is a unique, industry-focused business unit that provides a broad range of integrated services, leveraging deep industry experience with strong functional and technical capabilities and product knowledge. The financial services practice at EY offers integrated advisory services to financial institutions and other capital markets participants. Within EY's Advisory Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way, we help create a compelling business case for embedding the right analytical practice at the heart of clients' decision-making.

We're looking for Senior and Manager Big Data experts with expertise in the Financial Services domain and hands-on experience with the Big Data ecosystem:
- Expertise in data engineering, including design and development of big data platforms
- Deep understanding of modern data processing technology stacks such as Spark, HBase, and other Hadoop ecosystem technologies; development in Scala is a plus
- Deep understanding of streaming data architectures and technologies for real-time and low-latency data processing
- Experience with agile development methods, including core values, guiding principles, and key agile practices
- Understanding of the theory and application of Continuous Integration/Delivery
- Experience with NoSQL technologies and a passion for software craftsmanship
- Experience in the financial industry is a plus

Nice-to-have skills include familiarity with all Hadoop ecosystem components and Hadoop administrative fundamentals; experience with NoSQL data stores such as HBase, Cassandra, and MongoDB, along with HDFS, Hive, and Impala; schedulers such as Airflow and NiFi; and experience with Hadoop clustering and auto-scaling. The role also involves developing standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis, and defining and developing client-specific best practices around data management within a Hadoop environment on Azure cloud.

To qualify for the role, you must have a BE/BTech/MCA/MBA degree, a minimum of 3 years of hands-on experience in one or more relevant areas, and 6-10 years of total industry experience; experience in the Banking and Capital Markets domains is ideal. Skills and attributes for success include an issue-based approach to delivering growth, market, and portfolio strategy engagements for corporates; strong communication, presentation, and team-building skills; experience producing high-quality reports, papers, and presentations; and experience executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.

You will join a team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment, with the opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals in the only integrated global transaction business worldwide, and to work with EY Advisory practices globally with leading businesses across a range of industries. Working at EY offers inspiring and meaningful projects, education and coaching alongside practical experience for personal development, support and feedback from engaging colleagues, opportunities to develop new skills and progress your career, and the freedom and flexibility to handle your role in a way that's right for you. EY exists to build a better working world, helping to create long-term value for clients, people, and society, and to build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
Kochi, Kerala
On-site
At EY, you have the opportunity to build a career tailored to your uniqueness, with the global scale, support, inclusive culture, and technology to help you become the best version of yourself. Your unique voice and perspective are essential to EY's continuous improvement. Join us in creating an exceptional experience for yourself while working toward a better working world for all. EY Technology recognizes that technology is crucial to unlocking our clients' potential and delivering lasting value through innovation; we are dedicated to building a better working world by equipping EY and our clients with the products, services, support, and insights needed to succeed in the market.

Your role at EYTS involves implementing data integration and reporting solutions using ETL technology offerings. You will convert business and technical requirements into appropriate technical solutions using tools such as Azure Data Factory, Databricks, and Azure Data Lake Store; create data integration features using Azure Data Factory, Azure Databricks, and Scala/PySpark notebooks; and set up and maintain Azure PaaS SQL databases and database objects, along with Azure Blob Storage. The ability to develop complex queries, take ownership of project tasks, communicate effectively within the team, and deliver high-quality results within project timelines is crucial.

To excel in this role, you must hold a B.E/B.Tech/MCA/MS or equivalent degree in a Computer Science discipline, with 2-5 years of experience as a software developer. Hands-on experience developing data integration routines using various Azure technologies, accountability for quality technical deliverables, strong interpersonal skills, and the ability to work independently and collaboratively are essential, as are being extremely organized, adaptable to change, and a quick learner with a can-do attitude. Ideally, you will have experience developing end-to-end data integration and reporting solutions using Azure services and the Power Platform, creating Power BI dashboards and reports, and working with PMI and Agile standards; industry-recognized certifications in Azure offerings would be a plus.

As an ETL developer at EY, you will play a key role in converting product designs into functioning components by adhering to architectural standards and applying judgment in implementing Application Engineering methodologies. Your work will contribute to the success of EY's growth strategy and offer fulfilling career opportunities spanning various business disciplines. EY Global Delivery Services (GDS) offers a dynamic, truly global delivery network where you will collaborate with diverse teams on exciting projects and work with well-known brands worldwide. Continuous learning, success as you define it, transformative leadership, and a diverse and inclusive culture are some of the benefits of working at EY. If you meet the criteria above, we encourage you to reach out to us at your earliest convenience. Join us at EY in building a better working world, creating long-term value for clients, people, and society, and fostering trust in the capital markets through data- and technology-enabled solutions.
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
We are looking for an experienced and skilled Azure Data Engineer to join our team at Creant for a contract-based position in Pune. As an Azure Data Engineer, you will design, develop, and implement data analytics and data warehouse solutions on the Azure data platform, collaborating closely with business stakeholders, data architects, and technical teams to ensure efficient data integration, transformation, and availability.

Your key responsibilities include designing, developing, and implementing data warehouse and data analytics solutions on the Azure data platform; creating and managing data pipelines using Azure Data Factory (ADF) and Azure Databricks; and working extensively with Azure AppInsights, Dataverse, and PowerCAT tools to ensure efficient data processing and integration. You will implement and manage data storage solutions using Azure SQL Database and other Azure data services, and design and develop Logic Apps and Azure Function Apps for data processing, orchestration, and automation. You will also collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions; perform data validation and quality checks and ensure data consistency across systems; monitor, troubleshoot, and optimize data solutions for performance, scalability, and security; and prepare technical documentation and support project handover to operations teams.

Primary skills:
- Strong experience as a Data Engineer, with 6 to 10 years of relevant experience
- Expertise in Azure data engineering services such as Azure AppInsights, Dataverse, PowerCAT tools, Azure Data Factory (ADF), Azure Databricks, Azure SQL Database, Azure Function Apps, and Azure Logic Apps
- Proficiency in ETL/ELT processes, data integration, and data migration
- Solid understanding of data warehouse architecture and data modeling principles
- Experience working on large-scale data platforms and handling complex data workflows
- Familiarity with Azure analytics services and related data tools
- Strong knowledge of SQL, and of Python or Scala, for data manipulation and processing

Preferred skills include knowledge of Azure Synapse Analytics, Cosmos DB, and Azure Monitor; a good understanding of data governance, security, and compliance; and strong problem-solving, troubleshooting, communication, and stakeholder-management skills.
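As an illustration of the Azure Function Apps work described, a minimal HTTP-triggered function using the Python v2 programming model; the route and validation logic are invented, and a real deployment would also need the Functions runtime and host configuration.

```python
# Sketch of an HTTP-triggered Azure Function (Python v2 programming model)
# for lightweight data validation; the payload check is purely illustrative.
import json
import azure.functions as func

app = func.FunctionApp()

@app.route(route="validate", auth_level=func.AuthLevel.ANONYMOUS)
def validate(req: func.HttpRequest) -> func.HttpResponse:
    record = req.get_json()
    missing = [k for k in ("id", "amount") if k not in record]
    if missing:
        return func.HttpResponse(
            json.dumps({"ok": False, "missing": missing}), status_code=400)
    return func.HttpResponse(json.dumps({"ok": True}), status_code=200)
```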
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
Kanpur, Uttar Pradesh
On-site
ITH Tech in Kanpur is actively involved in business strategy, consulting, blockchain development, and neural-network-enabled brain-computer interfaces. The company is dedicated to building solutions and products based on cutting-edge technologies and is looking for skilled individuals with a passion for emerging technologies; diversity is embraced, and professionals who share the same values are sought.

As a blockchain developer at ITH Tech, you will collaborate with a team of developers and engineers to create exceptional blockchain applications. A profound understanding of blockchain technology is crucial, along with expertise in computer networking, cryptography, algorithms, and data structures. ITH Tech is an early adopter of blockchain technology and seeks professionals with excellent coding skills and a drive to develop innovative, decentralized solutions.

Responsibilities:
- Develop innovative solutions using blockchain technologies
- Define architecture and best practices for blockchain technology adoption and implementation
- Share best practices and provide expertise in solving blockchain engineering issues
- Write high-quality code to meet project requirements
- Support the entire development lifecycle from concept to release
- Collaborate with internal teams to determine system requirements

Requirements:
- Hands-on experience developing proofs of concept and pilots on at least one blockchain platform, such as Ethereum, Hyperledger, or MultiChain
- Proficiency in languages such as Java, Golang, Scala, Haskell, Erlang, Python, C, C++, or C#
- Experience with open-source tools and technologies
- Deep understanding of Bitcoin and other cryptocurrencies
- Familiarity with various distributed consensus methodologies (mining, PoS, etc.)
- Strong grasp of cryptography, including asymmetric and symmetric encryption, hash functions, and encryption/signatures
- Knowledge of versioning systems such as Git and Bitbucket
- Excellent teamwork and communication skills
- Passion for best design and coding practices and a drive to innovate

Education & Experience:
- B.Tech/MCA/M.Tech in a relevant field
- Minimum 2 years of experience in blockchain development

Preferred technology skills: Microsoft SQL Server, Visual Studio, .NET, MVC, AJAX, SQL, C, C++, C#, JavaScript, Node.js, jQuery, SOAP, REST, FTP, HTML, XML, XSLT, XCOD, neural networks, regression, Agile Scrum, and MySQL.

Desired qualifications include excellent programming skills, knowledge of blockchain protocols, strong analytical abilities, effective communication skills, the ability to develop the full lifecycle of blockchain applications from research to execution, proficiency in setting up robust firewalls for enhanced blockchain ecosystem security, and a passion for innovation.
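A toy Python illustration of the hash-function and proof-of-work concepts the requirements list; this is a teaching sketch, not production blockchain code, and real platforms add signatures, Merkle trees, and network consensus on top of it.

```python
# Toy block hashing and proof-of-work: find a nonce whose SHA-256 hash
# starts with a given number of zero hex digits.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def mine(block: dict, difficulty: int = 4) -> dict:
    block = dict(block, nonce=0)
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

genesis = mine({"index": 0, "prev_hash": "0" * 64, "data": "hello"})
print(genesis["nonce"], block_hash(genesis))
```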
Posted 5 days ago
3.0 - 7.0 years
0 - 0 Lacs
Hyderabad, Telangana
On-site
You should have expertise in database testing and be able to write complex SQL queries, along with proficiency in at least one programming language such as Python, Java, or Scala, and expertise in an automation tool such as Robot Framework, Selenium WebDriver, or Unified Functional Testing (UFT). The role involves hands-on experience in system, integration, and regression testing, as well as developing automated solutions for integration and regression testing. A basic understanding of Unix is required, along with basic knowledge of Databricks and Spark concepts. Fairly good knowledge of the Financial Services domain, ideally Market Risk, would be beneficial. The rate for this position is 7,500 to 8,000 INR per day, and you should be ready to work from the Pune ODC on a hybrid model.
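A self-contained sketch of automated database testing of the kind described, using sqlite3 so it runs anywhere; the tables and checks are invented, and in practice the same pattern would target the project's actual databases.

```python
# Compare a source table against its transformed target: row counts and values.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER, amount REAL);
    INSERT INTO src VALUES (1, 100.0), (2, 200.0);
    CREATE TABLE tgt AS SELECT id, amount * 1.0 AS amount FROM src;
""")

src_count = conn.execute("SELECT COUNT(*) FROM src").fetchone()[0]
tgt_count = conn.execute("SELECT COUNT(*) FROM tgt").fetchone()[0]
assert src_count == tgt_count, f"row counts differ: {src_count} vs {tgt_count}"

mismatches = conn.execute("""
    SELECT s.id FROM src s JOIN tgt t ON s.id = t.id
    WHERE s.amount <> t.amount
""").fetchall()
assert not mismatches, f"amount mismatches for ids: {mismatches}"
print("all checks passed")
```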
Posted 5 days ago
7.0 - 10.0 years
14 - 18 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Job Summary:
We are seeking a highly skilled and experienced Scala Developer with strong hands-on expertise in functional programming, RESTful API development, and building scalable microservices. The ideal candidate will have experience with the Play Framework, Akka, or Lagom, and be comfortable working with both SQL and NoSQL databases in cloud-native, Agile environments.

Key Responsibilities:
- Design, develop, and deploy scalable backend services and APIs using Scala
- Build and maintain microservices using the Play Framework, Akka, or Lagom
- Develop RESTful APIs and integrate with internal/external services
- Handle asynchronous programming and stream processing, and ensure efficient concurrency
- Optimize and refactor code for better performance, readability, and scalability
- Collaborate with cross-functional teams including Product, UI/UX, DevOps, and QA
- Work with databases such as PostgreSQL, MySQL, Cassandra, or MongoDB
- Participate in code reviews, documentation, and mentoring of team members
- Build and manage CI/CD pipelines using Docker, Git, and relevant DevOps tools
- Follow Agile/Scrum practices and contribute to sprint planning and retrospectives

Must-Have Skills:
- Strong expertise in Scala and functional programming principles
- Experience with the Play Framework, Akka, or Lagom
- Deep understanding of RESTful APIs, microservices architecture, and API integration
- Proficiency with concurrency, asynchronous programming, and stream processing
- Hands-on experience with SQL/NoSQL databases (PostgreSQL, MySQL, Cassandra, MongoDB)
- Familiarity with SBT or Maven as build tools
- Experience with Git, Docker, and CI/CD workflows
- Comfort working in Agile/Scrum environments

Good to Have:
- Experience with data processing frameworks like Apache Spark
- Exposure to cloud environments (AWS, GCP, or Azure)
- Strong debugging, troubleshooting, and analytical skills

Educational Qualification:
- Bachelor's degree in Computer Science, Engineering, or a related field

Why Join Us?
- Opportunity to work on modern, high-impact backend systems
- Collaborative and learning-driven environment
- Be part of a growing technology team building solutions at scale
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Engineer, you will design, build, and maintain data pipelines on the Microsoft Azure cloud platform, with a primary focus on technologies such as Azure Data Factory, Azure Synapse Analytics, PySpark, and Python for efficient handling of complex data processing tasks. Your key responsibilities include designing and implementing data pipelines using Azure Data Factory or other orchestration tools, writing SQL queries for ETL processes, and collaborating with data analysts to meet data requirements and ensure data quality. You will also implement data governance practices for security and compliance, monitor and optimize data pipelines for performance, and develop unit tests for your code. Working in an Agile environment, you will be part of a team that develops Modern Data Warehouse solutions on the Azure stack, coding in Spark (Scala or Python) and T-SQL. Proficiency with source code control systems like Git, designing solutions with Azure data services, and managing team governance are essential aspects of this role, as are providing technical leadership, guidance, and support to team members, resolving blockers, and reporting progress to customers regularly. Preferred skills and experience include a good understanding of PySpark and Python, proficiency in Azure data engineering tools (Azure Data Factory, Databricks, Synapse Analytics), experience handling large datasets, exposure to DevOps basics, and knowledge of release engineering fundamentals.
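To illustrate the "develop unit tests for code" responsibility, a minimal pytest sketch for a PySpark transformation; the transformation under test is invented.

```python
# Unit-testing a PySpark transformation with pytest and a local Spark session.
import pytest
from pyspark.sql import SparkSession, functions as F

def add_total(df):
    # Transformation under test: derive a total column from qty and price.
    return df.withColumn("total", F.col("qty") * F.col("price"))

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_total(spark):
    df = spark.createDataFrame([(2, 10.0)], ["qty", "price"])
    result = add_total(df).collect()[0]
    assert result["total"] == 20.0
```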
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
You are an experienced Data Engineer with at least 6 years of relevant experience. In this role, you will work as part of a team developing Data and Analytics solutions, participating in the development of cloud data warehouses, data-as-a-service, and business intelligence solutions. You should be able to provide forward-thinking solutions in data integration and ensure delivery of a quality product. Experience in developing Modern Data Warehouse solutions using the Azure or AWS stack is required. You should hold a Bachelor's degree in computer science & engineering or have equivalent demonstrable experience; cloud certifications in the Data, Analytics, or Ops/Architect space are desirable.

Primary skills:
- 6+ years of experience as a Data Engineer, with a key/lead role in implementing large data solutions
- Programming experience in Scala or Python, and SQL
- Minimum of 1 year of experience in MDM/PIM solution implementation with tools like Ataccama, Syndigo, or Informatica
- Minimum of 2 years of experience implementing data engineering pipelines and solutions in Snowflake
- Minimum of 2 years of experience implementing data engineering pipelines and solutions in Databricks
- Working knowledge of AWS and Azure services such as S3, ADLS Gen2, AWS Redshift, AWS Glue, Azure Data Factory, and Azure Synapse
- Demonstrated analytical and problem-solving skills
- Excellent written and verbal communication skills in English

Secondary skills include familiarity with Agile practices and version control platforms such as Git and CodeCommit, problem-solving skills, an ownership mentality, and a proactive rather than reactive approach. This is a permanent position based in Trivandrum/Bangalore. If you meet the requirements and are looking for a challenging opportunity in data engineering, we encourage you to apply before the closing date of 11-10-2024.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Data Analytics Intmd Analyst position is ideal for a developing professional who can independently handle most problems and has the flexibility to solve complex issues. By merging in-depth specialty-area knowledge with a strong understanding of industry standards and practices, you will contribute to the objectives of the subfunction/job family. Your role involves applying analytical thinking and using data analysis tools and methodologies to make informed judgments and recommendations based on factual information. Your responsibilities include integrating in-depth data analysis knowledge with industry standards, understanding how data analytics teams collaborate with others to achieve objectives, applying project management skills, breaking down information systematically, communicating effectively, and ensuring the quality and timeliness of the service the team provides. You will also provide informal guidance and on-the-job training to new team members, assess risks when making business decisions, and prioritize the firm's reputation and compliance with applicable laws and regulations. Your expertise in Hadoop, Python, Spark, Hive, RDBMS, and Scala, along with knowledge of statistical modeling tools for large data sets, will be crucial for success in this role.

To qualify, you should have at least 5 years of relevant experience, strong expertise in the technologies above, the ability to use complex analytical, interpretive, and problem-solving techniques effectively, and excellent interpersonal, verbal, and written communication skills. A Bachelor's/University degree or equivalent experience is required. This description provides an overview of the primary duties; additional responsibilities may be assigned as needed. Citi is an equal opportunity and affirmative action employer, offering a full-time position in the Technology job family group, specifically in Data Analytics. If you are a person with a disability and require accommodations to apply for a career opportunity at Citi, please review the Accessibility at Citi guidelines.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Punjab
On-site
Copeland is a global climate technologies company engineered for sustainability, creating sustainable and efficient residential, commercial, and industrial spaces through HVACR technologies. The company protects temperature-sensitive goods throughout the cold chain and provides comfort worldwide. By combining best-in-class engineering, design, and manufacturing with leading brands in compression, controls, software, and monitoring solutions, Copeland develops next-generation climate technology tailored for future needs. Whether you are a professional seeking a career change, an undergraduate student exploring opportunities, or a recent graduate with an advanced degree, numerous opportunities await you to innovate, be challenged, and make a significant impact by joining the team today.

In Software Development, you will develop code and solutions that transfer and transform data across various systems; maintain deep technical knowledge of the tools in the data warehouse, data hub, and analytical tools; ensure data is transformed and stored efficiently for retrieval and usage; and optimize the performance of data systems. Developing a profound understanding of the business systems underlying the analytical systems is essential. You will adhere to the standard software development lifecycle, code control, code standards, and process standards, continually enhancing your technical knowledge through self-training, educational opportunities, and participation in professional organizations related to your tech skills.

In Systems Analysis, you will collaborate with key stakeholders to understand business needs and capture functional and technical requirements, propose ideas to simplify solution designs, and communicate expectations to stakeholders and resources during solution delivery. Developing and executing test plans to ensure the successful rollout of solutions, including data accuracy and quality, is also part of your responsibilities.

In Service Management, you will communicate effectively with leaders and stakeholders to address obstacles during solution delivery; define and manage promised delivery dates; proactively research, analyze, and predict operational issues; and offer viable options to resolve unexpected challenges during solution development and delivery.

You should hold a Bachelor's degree in Computer Science/Information Technology or equivalent, communicate effectively with individuals at all levels, verbally and in writing, in a courteous, tactful, and professional manner, and be comfortable working in a large, global corporate structure. Advanced English (additional language proficiency is advantageous), a strong sense of ethics and adherence to the company's core values, and willingness to travel domestically and internationally to support global implementations are required. You should be able to clearly identify and define problems, assess alternative solutions, make timely decisions, and operate efficiently in ambiguous situations, with strong analytical skills to evaluate approaches against objectives, and you should have a minimum of three years of experience in a Data Engineer role with expertise in the relevant tools and technologies.

Your soft skills include writing about technical concepts proficiently, leading problem-solving teams, resolving conflict efficiently, collaborating on cross-functional projects, and driving process-mapping sessions. The Korn Ferry competencies for the role include customer focus, building networks, instilling trust, being tech-savvy, interpersonal savvy, self-awareness, taking action, collaborating, and being a nimble learner.

The company's commitment to its people is evident in its dedication to sustainability, reducing carbon emissions, and improving energy efficiency through groundbreaking innovations in HVACR technology and cold chain solutions. A culture of passion, openness, and collaboration empowers employees to work toward the common goal of making the world a better place. Copeland invests in the comprehensive development of individuals, ensuring personal and professional growth from onboarding through senior leadership, and offers flexible and competitive benefits plans that cater to individual and family needs, with various options for time off, including paid parental leave, vacation, and holiday leave.

Copeland's commitment to Diversity, Equity & Inclusion emphasizes that a diverse, equitable, and inclusive environment is essential for organizational success, fostering a culture where every employee is welcomed, heard, respected, and valued for their experiences, ideas, perspectives, and expertise. Embracing diversity and inclusion drives innovation, enhances customer service, and creates a positive impact in the communities where the company operates. Copeland is an Equal Opportunity Employer, fostering an inclusive workplace where all individuals are valued and respected for their contributions and unique qualities.
Posted 5 days ago
5.0 - 10.0 years
0 Lacs
Hyderabad, Telangana
On-site
You are an experienced Azure Databricks Engineer responsible for designing, developing, and maintaining scalable data pipelines and supporting data infrastructure in an Azure cloud environment. Your key responsibilities include designing ETL pipelines using Azure Databricks, building robust data architectures on Azure, collaborating with stakeholders to define data requirements, optimizing data pipelines for performance and reliability, implementing data transformation and cleansing processes, managing Databricks clusters, and leveraging Azure services for data orchestration and storage.

You must have 5-10 years of experience in data engineering or a related field, with extensive hands-on experience in Azure Databricks and Apache Spark. Strong knowledge of Azure cloud services such as Azure Data Lake, Data Factory, Azure SQL, and Azure Synapse Analytics is required, as is experience with Python, Scala, or SQL for data manipulation, ETL frameworks, Delta Lake and Parquet formats, Azure DevOps, CI/CD pipelines, big data architecture, and distributed systems. Knowledge of data modeling, performance tuning, and optimization of big data solutions is expected, along with problem-solving skills and the ability to work in a collaborative environment. Preferred qualifications include experience with real-time data streaming tools, Azure certifications, machine learning frameworks and their integration with Databricks, and data visualization tools like Power BI. A Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field is required.
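A brief sketch of a Delta Lake upsert (MERGE), a common pattern in the Databricks pipelines this role describes; the table path, keys, and data are placeholders.

```python
# Delta Lake MERGE upsert, assuming a Databricks cluster (or delta-spark
# configured locally) and an existing Delta table at the target path.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

updates = spark.createDataFrame(
    [(1, "shipped"), (2, "new")], ["order_id", "status"])

target = DeltaTable.forPath(spark, "/mnt/curated/orders")

(target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()      # update rows whose keys already exist
    .whenNotMatchedInsertAll()   # insert rows that are new
    .execute())
```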
Posted 5 days ago