5.0 - 8.0 years
10 - 20 Lacs
Bengaluru
Hybrid
Role & responsibilities
- 5 to 8 years of experience as a Java developer, able to code using the Spring Framework.
- Proven experience in building and consuming APIs (RESTful and/or SOAP).
- Knowledge of orchestration frameworks (e.g., Camunda, Apache Airflow).
- Experience with relational databases (e.g., Db2, MySQL, PostgreSQL, Oracle).
- Understanding of software development best practices and design patterns.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork skills.
- Hands-on experience in software development, particularly in n-tier architectures that span multiple platforms.
- Strong proficiency in Core Java and understanding of OOP.
- Good knowledge of common Java frameworks and technologies, including the Spring Framework and Hibernate/JPA.
- Experience with Java multi-threaded programming is highly preferred.
- Experience with web services, including knowledge of REST/SOAP/XML/JSON.
- Good knowledge of and hands-on experience with databases (SQL).
- Working knowledge of cross-platform systems running on Windows and Linux.
- Web UI experience using HTML5, JavaScript, or Angular is a plus.
- Knowledge of and hands-on experience with Agile/DevOps is a plus.
- Hands-on development experience in the banking/financial industry is a plus.
- Good English reading/writing skills and proficient oral English communication.
- Good Chinese reading/writing skills and proficient oral Chinese communication.

NOTE: Interested candidates can share their updated CV at Tejashwini@srinav.net, Phone No: 8197358132
Posted 1 week ago
8.0 - 11.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Role: MLOps Engineer
Location: Coimbatore
Mode of Interview: In Person

Keyword skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI; PySpark, Azure Databricks; MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline; Kubernetes, AKS, Terraform, FastAPI; model deployment, model monitoring, model retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift, model drift); experiment tracking; MLOps architecture; REST API publishing.

Job responsibilities:
- Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile, and automated approach to Data Science.
- Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications:
- Wide experience with Kubernetes.
- Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts; hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience implementing CI/CD/CT pipelines.
- Experience with cloud platforms (preferably AWS) would be an advantage.
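The "drift detection" responsibility above is commonly implemented as a Population Stability Index (PSI) check comparing production feature values against the training baseline. A pure-Python sketch; the 0.2 alert threshold is a common rule of thumb, not something stated in the posting:

```python
# Population Stability Index (PSI): bin both samples on the baseline's range
# and compare bucket frequencies. Higher PSI = more drift.
import math

def psi(expected, actual, bins=10):
    """Return the PSI between a baseline sample and a new sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Smooth zero buckets so the log term stays finite.
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]

    return sum((e - a) * math.log(e / a)
               for e, a in zip(frac(expected), frac(actual)))

baseline = [i / 100 for i in range(1000)]   # training-time feature values
shifted = [x + 5 for x in baseline]         # production values, clearly drifted

assert psi(baseline, baseline) < 0.1        # no drift against itself
assert psi(baseline, shifted) > 0.2         # drift flagged
```

In a retraining pipeline, a PSI above the threshold would typically raise an alert or trigger the retraining DAG rather than fail silently.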
Posted 1 week ago
9.0 - 14.0 years
12 - 22 Lacs
Bengaluru
Remote
We are looking for Data Engineers for multiple locations.

Total experience: 9-15 years
Locations: Chennai, Mumbai, Pune, Noida, Hyderabad & Bangalore

Skill combinations:
1. AWS + Airflow + Python
2. AWS + Airflow + DBT

Please share your details at Saudaminic@hexaware.com or apply at: https://forms.office.com/Pages/ResponsePage.aspx?id=9TYMfIOvJEyIRJli4BY3GdCP897qAcpBoCWuDZyUhuZUN0s4OUNGOTk4UFIzOFVMM1A4UEkzV0JQRi4u

JD - must have:
- 10+ years of experience
- Great communicator / client facing
- Individual contributor
- 100% hands-on in the mentioned skills

DBT proficiency:
- Model development: experience creating complex DBT models, including incremental models, snapshots, and documentation; ability to write and maintain DBT macros for reusable code.
- Testing and documentation: proficiency in implementing DBT tests for data validation and quality checks; familiarity with generating and maintaining documentation using DBT's built-in features.
- Version control: experience managing DBT projects using Git, including implementing CI/CD processes from scratch.

AWS expertise:
- Data storage solutions: in-depth understanding of AWS S3 for data storage, including best practices for organization and security; experience with AWS Redshift for data warehousing and performance optimization.
- Data integration: familiarity with AWS Glue for ETL processes and orchestration.

Nice to have:
- Experience with AWS Lambda for serverless data processing tasks.

Thanks,
Saudamini Chauhan
Saudaminic@hexaware.com
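The "incremental models" named under DBT proficiency boil down to merging only rows newer than the target table's high-water mark. A warehouse-agnostic sketch of that pattern using stdlib sqlite3 in place of Redshift; table and column names are illustrative, not from the posting:

```python
# Incremental-load pattern: find the target's max updated_at, then upsert
# only the source rows newer than it (what dbt's is_incremental() filter
# plus an incremental materialization does on the warehouse).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL, updated_at TEXT);
    CREATE TABLE stg_orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT);
    INSERT INTO src_orders VALUES (1, 10.0, '2024-01-01'), (2, 20.0, '2024-01-02');
""")

def run_incremental(con):
    (hwm,) = con.execute(
        "SELECT COALESCE(MAX(updated_at), '') FROM stg_orders").fetchone()
    con.execute("""
        INSERT INTO stg_orders
        SELECT id, amount, updated_at FROM src_orders WHERE updated_at > ?
        ON CONFLICT(id) DO UPDATE SET
            amount = excluded.amount, updated_at = excluded.updated_at
    """, (hwm,))

run_incremental(con)                                 # initial load: 2 rows
con.execute("INSERT INTO src_orders VALUES (2, 25.0, '2024-01-03')")
run_incremental(con)                                 # picks up only the newer row

assert con.execute("SELECT amount FROM stg_orders WHERE id = 2").fetchone() == (25.0,)
assert con.execute("SELECT COUNT(*) FROM stg_orders").fetchone() == (2,)
```

Each run touches only the delta since the last run, which is what makes the pattern cheap on large fact tables.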
Posted 1 week ago
2.0 - 4.0 years
2 - 6 Lacs
Hyderabad
Work from Office
Fusion Plus Solutions Inc is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Tech Stalwart Solution Private Limited is looking for a Sr. Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Locations: Bengaluru | Gurgaon

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
As part of BCG's X team, you will work closely with consulting teams on a diverse range of advanced analytics and engineering topics. You will have the opportunity to leverage analytical methodologies to deliver value to BCG's Consulting (case) teams and Practice Areas (domains) by providing analytical and engineering subject matter expertise. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data pipelines, systems, and solutions that empower our clients to make informed business decisions. You will collaborate closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver high-quality data solutions that meet our clients' needs.

You're Good At
- Delivering original analysis and insights to case teams, typically owning all or part of an analytics module while integrating with a case team.
- Designing, developing, and maintaining efficient and robust data pipelines for extracting, transforming, and loading data from various sources to data warehouses, data lakes, and other storage solutions.
- Building data-intensive solutions that are highly available, scalable, reliable, secure, and cost-effective using programming languages like Python and PySpark.
- Deep knowledge of Big Data querying and analysis tools, such as PySpark, Hive, Snowflake, and Databricks.
- Broad expertise in at least one cloud platform (AWS/GCP/Azure).
- Working knowledge of automation and deployment tools such as Airflow, Jenkins, and GitHub Actions, as well as infrastructure-as-code technologies like Terraform and CloudFormation.
- Good understanding of DevOps, CI/CD pipelines, orchestration, and containerization tools like Docker and Kubernetes.
- Basic understanding of Machine Learning methodologies and pipelines.
- Communicating analytical insights through sophisticated synthesis and packaging of results (including PPT slides and charts) with consultants; collecting, synthesizing, and analyzing case team learnings and inputs into new best practices and methodologies.

Communication Skills
Strong communication skills, enabling effective collaboration with both technical and non-technical team members.

Thinking Analytically
You should be strong in analytical solutioning, with hands-on experience in advanced analytics delivery through the entire analytics life cycle. Strong analytics skills, with the ability to develop and codify knowledge and provide analytical advice where required.

What You'll Bring
- Bachelor's/Master's degree in computer science engineering/technology.
- At least 2-4 years in the relevant domain of Data Engineering across industries, with work experience providing analytics solutions in a commercial setting. Consulting experience will be considered a plus.
- Proficient understanding of distributed computing principles, including management of Spark clusters with all included services; experience with various implementations of Spark preferred.
- Basic hands-on experience with data engineering tasks such as productizing data pipelines, building CI/CD pipelines, and code orchestration using tools like Airflow and DevOps practices.

Good to have:
- Software engineering concepts and best practices, like API design and development, testing frameworks, packaging, etc.
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB.
- Knowledge of web development technologies.
- Understanding of the different stages of machine learning system design and development.

#BCGXjob

Who You'll Work With
You will work with the case team and/or client technical POCs and the broader X team.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
FairMoney is a pioneering mobile banking institution specializing in extending credit to emerging markets. Established in 2017, the company currently operates primarily within Nigeria, and it has secured nearly €50 million in funding from renowned global investors, including Tiger Global, DST, and Flourish Ventures. In alignment with its vision, FairMoney is actively constructing the foremost mobile banking platform and point-of-sale (POS) solution tailored for emerging markets. The journey began with the introduction of a digital microcredit application exclusively available on Android and iOS devices. Today, FairMoney has significantly expanded its range of services, encompassing a comprehensive suite of financial products, such as current accounts, savings accounts, debit cards, and state-of-the-art POS solutions designed to meet the needs of both merchants and agents. FairMoney thrives on its diverse workforce, bringing together talent from over 27 nationalities. This multicultural team drives the company's mission of reshaping financial services for underserved communities. To gain deeper insights into FairMoney's pivotal role in reshaping Africa's financial landscape, we invite you to watch this informative video.

Job Summary: Your mission is to develop data science-driven algorithms and applications to improve decisions in business processes like risk and debt collection, offering the best-tailored credit services to as many clients as possible.

Requirements
- Strong background in Mathematics/Statistics/Econometrics/Computer Science or a related field.
- 5+ years of work experience in analytics, data mining, and predictive data modelling, preferably in the fintech domain.
- Being best friends with Python and SQL.
- Hands-on experience handling large volumes of tabular data.
- Strong analytical skills: the ability to make sense of a variety of data and its relation/applicability to a specific business problem.
- Confidence working with key machine learning algorithms (GBM, XGBoost, Random Forest, logistic regression).
- Being at home building and deploying models around credit risk, debt collection, fraud, and growth.
- Track record of designing, executing, and interpreting A/B tests in a business environment.
- Strong focus on business impact and experience driving it end-to-end using data science applications.
- Strong communication skills.
- Being passionate about all things data.

Our tool stack
- Programming language: Python
- Production: Python APIs deployed on Amazon EKS (Docker, Kubernetes, Flask)
- ML: Scikit-Learn, LightGBM, XGBoost, shap
- ETL: Python, Apache Airflow
- Cloud: AWS, GCP
- Database: MySQL
- DWH: BigQuery, Snowflake
- BI: Tableau, Metabase, dbt
- Streaming applications: Flink, Kinesis

Role and Responsibilities
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
- Mine and analyze data from company databases and external data sources to drive optimization and improvement of risk strategies, product development, marketing techniques, and other business decisions.
- Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
- Use predictive modelling to increase and optimize customer experiences, revenue generation, and other business outcomes.
- Coordinate with different functional teams to make the best use of developed data science applications.
- Develop processes and tools to monitor and analyze model performance and data quality.
- Apply advanced statistical and data mining techniques to derive patterns from the data.
- Own data science projects end-to-end and proactively drive improvements in both data and models.

Benefits
- Paid Time Off (25 days vacation, sick leave & public holidays)
- Family leave (maternity, paternity)
- Training & development budget
- Paid company business trips (not mandatory)
- Remote work

Recruitment Process
1. Screening call with a Senior Recruiter
2. Take-home test assignment
3. Technical interview
4. Interview with the team and key stakeholders
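Interpreting the A/B tests mentioned in the requirements usually comes down to a two-proportion z-test on conversion-style metrics. A pure-stdlib sketch; the repayment-rate numbers are made up for illustration:

```python
# Two-proportion z-test for H0: p_a == p_b, e.g. comparing repayment rates
# under two debt-collection strategies. |z| > 1.96 rejects H0 at the
# two-sided 5% level.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return the z statistic for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Control A: 1000/2000 repaid (50%); variant B: 1080/2000 repaid (54%).
z = two_proportion_z(1000, 2000, 1080, 2000)
assert abs(z) > 1.96  # the 4-point lift is significant at this sample size
```

The same test with smaller samples would not clear 1.96, which is why the "designing" part of A/B testing includes sizing the experiment up front.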
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Streaming data - technical skills requirements:

Experience: 5+ years
- Solid hands-on and solution architecting experience in Big Data technologies (AWS preferred).
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, EMR.
- Hands-on experience with a programming language like Scala with Spark.
- Good command of and working experience with Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases.
- Hands-on working experience with any of the data engineering analytics platforms (Hortonworks, Cloudera, MapR, AWS); AWS preferred.
- Hands-on experience with data ingestion: Apache NiFi, Apache Airflow, Sqoop, and Oozie.
- Hands-on working experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming), with hands-on development experience in the above technologies.
- Data warehouse exposure with Apache NiFi, Apache Airflow, Kylo.
- Operationalization of ML models on AWS (e.g., deployment, scheduling, model monitoring, etc.).
- Feature engineering and data processing to be used for model development.
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.).
- Experience building data pipelines for structured/unstructured, real-time/batch, and event-driven synchronous/asynchronous data using MQ, Kafka, and stream processing.
- Hands-on working experience analyzing source system data and data flows, working with structured and unstructured data.
- Must be very strong in writing SQL queries.
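The event-driven processing the bullets above describe is, at its core, per-key aggregation over time windows, which is what Flink and Spark Streaming do at scale. A pure-Python sketch of a tumbling-window count over an in-memory event list; the field names are illustrative:

```python
# Tumbling-window aggregation: each event (timestamp, key) is assigned to
# the window containing its timestamp, and counts are kept per (key, window).
from collections import defaultdict

def tumbling_counts(events, window_sec):
    """Count events per (key, window-start) bucket."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - ts % window_sec   # e.g. ts=12, window 10s -> 10
        counts[(key, window_start)] += 1
    return dict(counts)

events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
out = tumbling_counts(events, window_sec=10)

assert out[("click", 0)] == 2   # two clicks in [0, 10)
assert out[("view", 0)] == 1
assert out[("click", 10)] == 1  # one click in [10, 20)
```

Real stream processors add the hard parts this sketch omits: out-of-order events, watermarks, and distributed state.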
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
Be part of a team that is transforming BCG into a bionic company! We are in the early stages of building a centralized Business Intelligence & Analytics function that will simplify and automate information delivery, providing advanced insights and analysis to support decision making. To date, the team has launched and operationalized several global-scale products and dashboards, enhancing how our leaders engage with information to manage the business. The next wave of digital reporting is underway, which will help to unlock further value for BCG's leadership and functions with best-in-class business intelligence and analytics. The Data Visualization Analyst will work as an integral part of an Agile team. You will be responsible for developing, enhancing, and maintaining a suite of dashboard products that will be leveraged globally by our business leaders and executive teams. Working as part of an Agile team, this role will interact with the business to understand use cases, create prototypes, and iterate on the design and launch of digital reporting and analytic products.

What You'll Bring
- 3-5+ years of experience developing with Tableau (certification preferred: Qualified Associate or Certified Professional).
- Proficient in dashboard/report design, development, and support in a business context.
- Experience with legacy report profiling and building modern replacements with enhanced user experiences and insights.
- Familiarity with Tableau Server and database-level security implementations.
- Practical experience using generative AI tools (e.g., ChatGPT) to automate repetitive tasks, streamline analysis workflows, or improve productivity.
- Familiarity with components of the modern data stack such as Snowflake, dbt, or Airflow, and a willingness to adapt as data infrastructure evolves.
- Experience building or contributing to lightweight data applications (e.g., with Streamlit or JavaScript/React-based interfaces) to enhance business decision-making.
- Strong proficiency in SQL and Python; experience with data prep tools like Alteryx and Tableau Prep.
- Exposure to other BI tools such as Power BI, Sigma, or Looker.
- Basic finance knowledge (P&L, balance sheet, etc.) preferred.
- Experience working within Agile development environments and ceremonies.

Who You'll Work With
As a member of the team, you will interact extensively with a diverse range of stakeholders from across the business, both geographically and functionally. The role sits within the overall Global Enterprise Service Team; coordinating and working with our Analysis, Planning, and Reporting teams will play a large part in this role.

Additional Info
You're Good At

Business and analytic skills:
- Developing insightful, visually compelling, and engaging dashboards that support decision making.
- Rapidly exploring, transforming, and synthesizing data from multiple sources to identify optimal data structures for reporting.
- Continuously improving reporting products to ensure performance, scalability, and security.
- Working collaboratively in a fast-paced Agile environment with both technical and non-technical stakeholders.
- Maintaining a customer-focused approach by deeply understanding user needs and feedback.
- Communicating clearly and transparently across all levels of the organization.
- Seeking opportunities to innovate and improve processes or tools for faster and more effective outcomes.

Communication, interpersonal, and teaming skills:
- Communicates proactively and clearly with stakeholders across all levels and geographies; keeps partners informed and engaged throughout the project lifecycle.
- Demonstrates strong ownership of projects, from planning through execution, and actively drives work forward rather than waiting for direction.
- Collaborates positively and builds strong, trust-based relationships within and across teams, including in multi-time-zone environments.
- Challenges assumptions constructively and asks thoughtful questions to deepen understanding and improve business outcomes.
- Navigates changing priorities and diverse audiences with adaptability, tact, and professionalism.
- Shows persistence and resilience in advancing ideas, solving problems, and delivering results.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
Posted 2 weeks ago
3.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
THE TEAM
This is an opportunity to join a team of highly skilled engineers who are redesigning the Creditsafe data platform with high throughput and scalability as the primary goals. The data delivery platform is built on AWS Redshift and S3 cloud storage. The platform is expected to manage over a billion objects, with a daily increment of more than 10 million objects, while handling addition, deletion, and correction of our data and indexes in an auditable manner. Our data processing application is entirely based on Python and designed to efficiently transform incoming raw data volumes into an API-consumable schema. The team is also building highly available, low-latency APIs to enable faster data delivery to our clients.

JOB PROFILE
Join us on this project of redesigning the Creditsafe platform in the cloud. You will be expected to work with technologies such as Python, Airflow, Redshift, DynamoDB, AWS Glue, and S3.

KEY DUTIES AND RESPONSIBILITIES
- Actively contribute to the codebase and participate in peer reviews.
- Design and build a metadata-driven, event-based distributed data processing platform using technologies such as Python, Airflow, Redshift, DynamoDB, AWS Glue, and S3.
- As an experienced engineer, play a critical role in the design, development, and deployment of our business-critical system.
- Build and scale Creditsafe APIs to securely support over 1,000 transactions per second using serverless technologies.
- Execute practices such as continuous integration and test-driven development to enable the rapid delivery of working code.
- Understand company domain data to make recommendations for improving existing products.

The responsibilities detailed above are not exhaustive, and you may be requested to take on additional responsibilities deemed reasonable by your direct line manager.

SKILLS AND QUALIFICATIONS
- Demonstrated ability to write clean, efficient code and knit it together with the cloud environment for best performance.
- Proven experience of development within a commercial environment, creating production-grade APIs and data pipelines in Python.
- You are looking to grow your skills through daily technical challenges, and you enjoy problem solving and whiteboarding in collaboration with a team.
- You have excellent communication skills and the ability to explain your views clearly to the team, and you are open to understanding theirs.
- A proven track record of drawing on deep and broad technical expertise to mentor engineers, complete hands-on technical work, and provide leadership on complex technology issues.
- Share your ideas collaboratively via wikis, discussion boards, etc., and share any decisions made, for the benefit of others.
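Handling additions, deletions, and corrections "in an auditable manner", as the team description above requires, is naturally modelled as an append-only event log from which current state is derived. A minimal pure-Python sketch of that pattern; the record shape and IDs are illustrative:

```python
# Append-only audit log: every change is a new event; nothing is mutated
# or removed, so the full history of any object can always be replayed.
log = []

def record(op, obj_id, payload=None):
    log.append({"op": op, "id": obj_id, "payload": payload})

def current_state():
    """Replay the full log to reconstruct the live objects."""
    state = {}
    for event in log:
        if event["op"] in ("add", "correct"):
            state[event["id"]] = event["payload"]
        elif event["op"] == "delete":
            state.pop(event["id"], None)
    return state

record("add", "company:1", {"name": "Acme Ltd", "score": 70})
record("correct", "company:1", {"name": "Acme Ltd", "score": 65})
record("add", "company:2", {"name": "Globex"})
record("delete", "company:2")

assert current_state() == {"company:1": {"name": "Acme Ltd", "score": 65}}
assert len(log) == 4  # the audit trail keeps every change, including deletes
```

At the scale described (billions of objects), the same idea would live in S3/Redshift with periodic snapshots so that state never has to be replayed from the beginning.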
Posted 2 weeks ago
7.0 - 12.0 years
18 - 25 Lacs
Bengaluru
Work from Office
JOB DESCRIPTION

Role expectations:
- Design, develop, and maintain robust, scalable, and efficient data pipelines.
- Monitor data workflows and systems to ensure reliability and performance.
- Identify and troubleshoot issues related to data flow and database performance.
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
- Continuously optimize existing data processes and architectures.

Qualifications:
- Programming languages: proficient in Python and SQL.
- Databases: strong experience with Amazon Redshift, Aurora, and MySQL.
- Data engineering: solid understanding of data warehousing concepts, ETL/ELT processes, and building scalable data pipelines.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork abilities.
Posted 2 weeks ago
7.0 - 12.0 years
18 - 25 Lacs
Noida, Gurugram, Bengaluru
Work from Office
JOB DESCRIPTION

Role expectations:
- Design, develop, and maintain robust, scalable, and efficient data pipelines.
- Monitor data workflows and systems to ensure reliability and performance.
- Identify and troubleshoot issues related to data flow and database performance.
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
- Continuously optimize existing data processes and architectures.

Qualifications:
- Programming languages: proficient in Python and SQL.
- Databases: strong experience with Amazon Redshift, Aurora, and MySQL.
- Data engineering: solid understanding of data warehousing concepts, ETL/ELT processes, and building scalable data pipelines.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork abilities.
Posted 2 weeks ago
7.0 - 10.0 years
9 - 13 Lacs
Mangaluru, Udupi
Hybrid
Cloud Leader (Jr. Data Architect)
- 7+ years of IT experience.
- Should have worked on any two structured databases (SQL/Oracle/Postgres) and one NoSQL database.
- Should be able to work with the presales team, proposing the best solution/architecture.
- Should have design experience on BigQuery/Redshift/Synapse.
- Manage the end-to-end product life cycle, from proposal to delivery, and regularly check with delivery on architecture improvement.
- Should be aware of security protocols for in-transit data and encryption/decryption of PII data.
- Good understanding of analytics tools for effective analysis of data.
- Should have been part of a production deployment team and a production support team.
- Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
- Experience with object-oriented/object-functional scripting languages: Python, Java, C++, Scala, etc.
- Experience in ETL and data warehousing.
- Experience with and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
- Experience with cloud platforms like AWS, GCP, and Azure.
- Experience with workflow management using tools like Apache Airflow.

Preferred:
- Aware of design best practices for OLTP and OLAP systems.
- Part of the team designing the DB and pipeline.
- Able to propose the right architecture and Data Warehouse/Data Mesh approaches.
- Aware of data sharing and multi-cloud implementation.
- Exposure to load testing methodologies, debugging pipelines, and delta load handling.
- Worked on heterogeneous migration projects.
- Experience on multiple cloud platforms.

Roles and Responsibilities
- Develop high-performance, scalable solutions using GCP that extract, transform, and load big data.
- Design and build production-grade data solutions, from ingestion to consumption, using Java/Python.
- Design and optimize data models on GCP using data stores such as BigQuery.
- Optimize data pipelines for performance and cost in large-scale data lakes.
- Write complex, highly optimized queries across large data sets and create data processing layers.
- Interact closely with Data Engineers to identify the right tools to deliver product features by performing POCs.
- Be a collaborative team player who interacts with business, BAs, and other Data/ML engineers.
- Research new use cases for existing data.
Posted 2 weeks ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Equifax is seeking creative, high-energy, and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You'll Do
- Design, develop, and operate high-scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Manage sole project priorities, deadlines, and deliverables.
- Research, create, and develop software applications to extend and improve Equifax solutions.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in Sprint planning, Sprint retrospectives, and other team activities.

What Experience You Need
- Bachelor's degree or equivalent experience.
- 5+ years of software engineering experience.
- 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, and CSS.
- 5+ years of experience with cloud technology: GCP, AWS, or Azure.
- 5+ years of experience designing and developing cloud-native solutions.
- 5+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, and GKE/Kubernetes.
- 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines; understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs.

What Could Set You Apart
- Self-starter who identifies and responds to priority shifts with minimal supervision.
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others.
- UI development (e.g., HTML, JavaScript, Angular, and Bootstrap).
- Experience with backend technologies such as Java/J2EE, Spring Boot, SOA, and microservices.
- Source code control management systems (e.g., SVN/Git, GitHub) and build tools like Maven and Gradle.
- Agile environments (e.g., Scrum, XP).
- Relational databases (e.g., SQL Server, MySQL).
- Atlassian tooling (e.g., JIRA, Confluence) and GitHub.
- Developing with a modern JDK (v1.7+).
- Automated testing: JUnit, Selenium, LoadRunner, SoapUI.
Posted 2 weeks ago
0 years
0 Lacs
Udupi, Karnataka, India
On-site
Cloud Leader (Jr. Data Architect)
- 7+ years of IT experience
- Should have worked on at least two relational databases (SQL/Oracle/Postgres) and one NoSQL database
- Should be able to work with the presales team, proposing the best solution/architecture
- Should have design experience on BigQuery/Redshift/Synapse
- Manage the end-to-end product life cycle, from proposal to delivery, and regularly check with delivery on architecture improvements
- Should be aware of security protocols for in-transit data and encryption/decryption of PII data
- Good understanding of analytics tools for effective analysis of data
- Should have been part of production deployment and production support teams
- Experience with big data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Experience in ETL and data warehousing
- Experience and a firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
- Experience with cloud platforms like AWS, GCP, and Azure
- Experience with workflow management using tools like Apache Airflow

Preferred
- Awareness of design best practices for OLTP and OLAP systems
- Should be part of the team designing the database and pipelines
- Should be able to propose the right architecture and Data Warehouse/Data Mesh approaches
- Should be aware of data sharing and multi-cloud implementation
- Should have exposure to load testing methodologies, debugging pipelines, and delta load handling
- Worked on heterogeneous migration projects
- Experience on multiple cloud platforms
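Workflow management with a tool like Apache Airflow ultimately means executing tasks in dependency order. A minimal sketch of that idea in plain Python (task names are invented; real Airflow DAGs are declared through Airflow's own API, not like this):

```python
from collections import deque

# Toy DAG: task name -> set of upstream dependencies.
deps = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load_warehouse": {"transform"},
    "refresh_dashboard": {"load_warehouse"},
}

def topological_run(deps):
    """Return tasks in a valid execution order (Kahn's algorithm)."""
    indegree = {t: len(u) for t, u in deps.items()}
    downstream = {t: [] for t in deps}
    for task, ups in deps.items():
        for u in ups:
            downstream[u].append(task)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)  # a real scheduler would execute the task here
        for child in downstream[task]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(deps):
        raise ValueError("cycle detected in DAG")
    return order

print(topological_run(deps))
```

The cycle check matters: orchestrators reject DAG definitions whose dependencies can never be satisfied.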
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payment choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title And Summary
Data Engineer II
Mastercard's Data Engineering & Analytics team seeks a Lead Data Engineer to develop data and analytics solutions for vast datasets collected from various consumer-focused businesses. Your role will involve creating high-performance algorithms, cutting-edge analytical techniques, and intuitive workflows that help users derive actionable insights from big data. You will work with large-scale data sets and front-end visualizations to unlock the value of big data and support business needs through innovative data-driven solutions.

Role
- Drive the evolution of data and services platforms with a strong emphasis on data engineering and data science, ensuring impactful advancements in data quality, scalability, and efficiency
- Develop and fine-tune methods and algorithms to generate precise, high-quality data at scale, including the creation and maintenance of feature stores, analytical stores, and curated datasets for enhanced data integrity and usability
- Solve complex data challenges involving multi-layered data sets and optimize the performance of existing data pipelines, libraries, and frameworks
- Provide support for deployed data applications and analytical models, identifying data issues and guiding resolutions
- Ensure proper data governance policies are followed by implementing or validating data lineage, quality checks, classification, etc.
- Integrate diverse data sources, including real-time, streaming, batch, and API-based data, to enrich platform insights and drive data-driven decision-making
- Experiment with new tools to streamline the development, testing, deployment, and running of our data pipelines
- Develop and enforce best practices for data engineering, including coding standards, code reviews, and documentation
- Ensure data security and privacy compliance, implementing measures to protect sensitive data
- Communicate, collaborate, and work effectively in a global environment

All About You
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- Extensive hands-on experience in data engineering, including implementing multiple end-to-end data warehouse projects in big data environments
- Proficiency in application development frameworks (Python, Java/Scala) and data processing/storage frameworks (Hadoop, Spark, Kafka)
- Experience developing data orchestration workflows using tools such as Apache NiFi, Apache Airflow, or similar platforms to automate and streamline data pipelines
- Experience with performance tuning of database schemas, databases, SQL, ETL jobs, and related scripts
- Experience working in Agile teams
- Experience developing data-driven applications and data processing workflows/pipelines and/or implementing machine learning systems at scale using Java, Scala, or Python, across all phases: data ingestion, feature engineering, modeling, tuning, evaluation, monitoring, and presenting analytics
- Experience developing integrated cloud applications with services like Azure, Databricks, AWS, or GCP
- Excellent analytical and problem-solving skills, with the ability to analyze complex data issues and develop practical solutions
- Strong communication and interpersonal skills, with the ability to collaborate effectively with, and facilitate activities across, cross-functional and geographically distributed teams and stakeholders
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
R-247955
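A toy version of the data-quality checks this role mentions (null, range, and domain checks applied before data is loaded downstream); the field names and rules below are invented for illustration:

```python
# Sample records with deliberate defects.
rows = [
    {"txn_id": "t1", "amount": 120.0, "currency": "USD"},
    {"txn_id": "t2", "amount": -5.0,  "currency": "USD"},   # fails range check
    {"txn_id": None, "amount": 9.5,   "currency": "EUR"},   # fails null check
]

# Named predicates, one per quality rule.
checks = {
    "txn_id_not_null": lambda r: r["txn_id"] is not None,
    "amount_non_negative": lambda r: r["amount"] >= 0,
    "known_currency": lambda r: r["currency"] in {"USD", "EUR", "GBP"},
}

def run_checks(rows, checks):
    """Return (row_index, failed_check_name) pairs for every violation."""
    failures = []
    for i, row in enumerate(rows):
        for name, check in checks.items():
            if not check(row):
                failures.append((i, name))
    return failures

failures = run_checks(rows, checks)
print(failures)  # [(1, 'amount_non_negative'), (2, 'txn_id_not_null')]
```

A pipeline would typically quarantine or reject failing rows and emit the failure list to a monitoring system rather than just printing it.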
Posted 2 weeks ago
6.0 - 8.0 years
10 - 15 Lacs
Hyderabad
Hybrid
Mega Walk-in Drive for Lead Software Engineer/Sr. Software Engineer - Data Engineer - Python & Hadoop

Your future duties and responsibilities:
Job Overview: CGI is looking for a talented and motivated Data Engineer with strong expertise in Python, Apache Spark, HDFS, and MongoDB to build and manage scalable, efficient, and reliable data pipelines and infrastructure. You'll play a key role in transforming raw data into actionable insights, working closely with data scientists, analysts, and business teams.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python and Spark
- Ingest, process, and transform large datasets from various sources into usable formats
- Manage and optimize data storage using HDFS and MongoDB
- Ensure high availability and performance of data infrastructure
- Implement data quality checks, validations, and monitoring processes
- Collaborate with cross-functional teams to understand data needs and deliver solutions
- Write reusable and maintainable code with strong documentation practices
- Optimize performance of data workflows and troubleshoot bottlenecks
- Maintain data governance, privacy, and security best practices

Required qualifications to be successful in this role:
- Minimum 6 years of experience as a Data Engineer or in a similar role
- Strong proficiency in Python for data manipulation and pipeline development
- Hands-on experience with Apache Spark for large-scale data processing
- Experience with HDFS and distributed data storage systems
- Strong understanding of data architecture, data modeling, and performance tuning
- Familiarity with version control tools like Git
- Experience with workflow orchestration tools (e.g., Airflow, Luigi) is a plus
- Knowledge of cloud services (AWS, GCP, or Azure) is preferred
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field

Preferred Skills:
- Experience with containerization (Docker, Kubernetes)
- Knowledge of real-time data streaming tools like Kafka
- Familiarity with data visualization tools (e.g., Power BI, Tableau)
- Exposure to Agile/Scrum methodologies

Skills: Hadoop, Hive, Python, SQL, English
Notice Period: 0-45 days
Prerequisites: Aadhaar card copy, PAN card copy, UAN
Disclaimer: The selected candidates will initially be required to work from the office for 8 weeks before transitioning to a hybrid model with 2 days of work from the office each week.
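The extract/transform/load stages this role describes can be sketched with plain Python generators; in the actual role these stages would be Spark jobs reading from HDFS, and the record format below is invented:

```python
# Raw source lines: id, name, score (some records are dirty).
raw = ["1,alice, 42", "2,bob,", "3,carol,17"]

def extract(lines):
    """Parse each line into stripped fields."""
    for line in lines:
        yield [part.strip() for part in line.split(",")]

def transform(records):
    """Type-cast valid records; drop records with a missing score."""
    for rec_id, name, score in records:
        if score:
            yield {"id": int(rec_id), "name": name, "score": int(score)}

def load(records, sink):
    """Append transformed records to the target store."""
    for rec in records:
        sink.append(rec)
    return sink

sink = []
load(transform(extract(raw)), sink)
print(sink)
```

Because each stage is a generator, records stream through one at a time instead of being materialized per stage, which is the same laziness idea Spark applies at cluster scale.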
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Solution Engineer
Location: Chennai (Hybrid)
Type: C2H

Why MResult?
Founded in 2004, MResult is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. MResult's expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value. As part of our team, you will collaborate with top minds in the industry to deliver cutting-edge solutions that solve real-world challenges.
Website: https://mresult.com/
LinkedIn: https://www.linkedin.com/company/mresult/

What We Offer:
At MResult, you can leave your mark on projects at the world's most recognized brands, access opportunities to grow and upskill, and do your best work with the flexibility of hybrid work models. Great work is rewarded, and leaders are nurtured from within. Our values of Agility, Collaboration, Client Focus, Innovation, and Integrity are woven into our culture, guiding every decision.

What This Role Requires
In the role of Solution Engineer, you will be a key contributor to MResult's mission of empowering our clients with data-driven insights and innovative digital solutions. Each day brings exciting challenges and growth opportunities. Here is what you will do:

Roles and responsibilities:
- Evaluate and implement solutions to meet business requirements, ensuring consistent usage and adherence to data management best practices
- Collaborate with product owners to prioritize features and manage technical requirements based on business needs, new technologies, and known issues
- Develop application design and documentation for leadership teams
- Assist in defining the vision for the shared data model, including sourcing, transformation, and loading approaches
- Manage daily operations of the team, ensuring on-time delivery of milestones
- Be accountable for end-to-end delivery of program outcomes within budget, aligning with relevant business units
- Foster collaboration with internal and external stakeholders, including software vendors and data providers
- Work independently with minimal supervision, capable of making recommendations
- Demonstrate a solid ability to tell a story with simplified views of complex datasets
- Deliver data reliability, efficiency, and best-in-class data governance, ensuring security and compliance
- Be an integral part of developing best-in-class solutions for the GAV organization
- Build dashboard and reporting proofs of concept as needed; develop reporting and analysis templates and tools
- Work in close collaboration with business teams throughout MASPA (Market Access Strategy, Pricing and Analytics) to determine tool functionality/configuration and data requirements, ensuring the analytic capability supports the most current business needs
- Partner with the Digital Client Partners to align on priorities, processes, and governance, and ensure experimentation to activate innovation and pipeline value

Must-have Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, or an engineering-related area
- 5+ years of relevant experience emphasizing data modeling, development, or systems engineering
- 1+ years with a data visualization tool (e.g., Tableau, Power BI)
- 2+ years of experience in any number of the following tools, languages, and databases: MySQL, SQL, Aurora DB, Redshift, Snowflake
- Demonstrated capability in integrating and analyzing heterogeneous datasets; ability to identify trends, spot outliers, and find patterns
- Demonstrated expertise in matrixed, cross-functional teams and influencing without authority
- Proven experience and demonstrated skills with AWS services, Tableau, Airflow, Python, and Dataiku
- Experience with DevSecOps tools: JIRA, GitHub
- Experience with database design tools
- Deep knowledge of Agile methodologies and SDLC processes
- Excellent written, interpersonal, and oral communication skills; able to communicate and liaise broadly across functions and the global organization
- Strong analytical, critical thinking, and troubleshooting skills
- Ambition to learn and utilize emerging technologies while working in a stimulating team environment

Nice-to-Have:
- Advanced degree in Computer Engineering, Computer Science, Information Systems, or a related discipline
- Knowledge of GenAI and LLM frameworks (OpenAI, AWS)
- US market access functional knowledge and data literacy
- Statistical analysis to understand and improve possible limitations in models
- Experience with AI/ML frameworks
- Pytest and CI/CD tools
- Experience in UI/UX design
- Experience in solution architecture and product engineering

Manage, Master, and Maximize with MResult
MResult is an equal-opportunity employer committed to building an inclusive environment free of discrimination and harassment. Take the next step in your career with MResult, where your ideas help shape the future.
Posted 2 weeks ago
4.0 - 9.0 years
13 - 18 Lacs
Bengaluru
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences, and belief systems: the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

ZS's India Capability & Expertise Center (CEC) houses more than 60% of ZS people across three offices in New Delhi, Pune, and Bengaluru. Our teams work with colleagues around the world to deliver real-world solutions to the clients who drive our business. The CEC maintains standards of analytical, operational, and technological excellence to deliver superior results to our clients. ZS's Beyond Healthcare Analytics (BHCA) Team is shaping one of the key growth vectors for ZS.
Beyond Healthcare engagements comprise clients from industries like quick-service restaurants, technology, food & beverage, hospitality, travel, insurance, and consumer packaged goods, across the North America, Europe, and Southeast Asia regions. The BHCA India team currently has a presence across the New Delhi, Pune, and Bengaluru offices and is continuously expanding at a great pace. The BHCA India team works with colleagues across clients and geographies to create and deliver pragmatic real-world solutions leveraging AI SaaS products and platforms, generative AI applications, and other advanced analytics solutions at scale.

What You’ll Do
- Design and implement highly available data pipelines using Spark and other big data technologies
- Work with the data science team to develop new features that increase model accuracy and performance
- Create standardized data models to increase standardization across client deployments
- Troubleshoot and resolve issues in existing ETL pipelines
- Complete proofs of concept to demonstrate capabilities and connect to new data sources
- Instill best practices for software development, ensure designs meet requirements, and deliver high-quality work on schedule
- Document application changes and development updates

What You’ll Bring
- A master's or bachelor's degree in computer science or a related field from a top university
- 4+ years' overall experience; 2+ years' experience in data engineering using Apache Spark and SQL
- 2+ years of experience building and leading a strong data engineering team
- Experience with full software lifecycle methodology, including coding standards, code reviews, source control management, build processes, testing, and operations
- In-depth knowledge of Python, SQL, PySpark, distributed computing, analytical databases, and other big data technologies
- Strong knowledge of one or more cloud environments such as AWS, GCP, and Azure
- Familiarity with the data science and machine learning development process
- Familiarity with orchestration tools such as Apache Airflow
- Strong analytical skills and the ability to develop processes and methodologies
- Experience working with cross-functional teams, including UX, business (e.g., marketing, sales), product management, and/or technology/IT/engineering, is a plus
- A forward thinker and self-starter who thrives on new challenges and adapts quickly to new knowledge

Perks & Benefits
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel
Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring.
If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.

ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find out more at www.zs.com
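The "standardized data models across client deployments" responsibility above can be sketched as a per-client column mapping onto one canonical schema; the client names and fields below are invented for illustration:

```python
# One canonical schema shared by all client deployments.
CANONICAL = ("store_id", "order_date", "revenue")

# Each client source maps its own column names onto the canonical ones.
CLIENT_MAPPINGS = {
    "client_a": {"shop": "store_id", "dt": "order_date", "sales": "revenue"},
    "client_b": {"location_id": "store_id", "date": "order_date", "amt": "revenue"},
}

def standardize(row, client):
    """Rename a client row's columns to the canonical schema, or fail loudly."""
    mapping = CLIENT_MAPPINGS[client]
    out = {mapping[k]: v for k, v in row.items() if k in mapping}
    missing = set(CANONICAL) - set(out)
    if missing:
        raise ValueError(f"unmapped canonical fields: {missing}")
    return out

result = standardize({"shop": "S1", "dt": "2024-01-05", "sales": 99.0}, "client_a")
print(result)
```

Keeping the mapping as data rather than code is what lets one pipeline serve many client deployments: onboarding a new client means adding a mapping entry, not a new pipeline.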
Posted 2 weeks ago
5.0 - 10.0 years
12 - 16 Lacs
Chennai, Bengaluru
Work from Office
Who We Are
Applied Materials is the global leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. We design, build, and service cutting-edge equipment that helps our customers manufacture display and semiconductor chips, the brains of the devices we use every day. As the foundation of the global electronics industry, Applied enables the exciting technologies that literally connect our world, like AI and IoT. If you want to work beyond the cutting edge, continuously pushing the boundaries of science and engineering to make possible the next generations of technology, join us to Make Possible® a Better Future.

What We Offer
Location: Bangalore, IND; Chennai, IND
At Applied, we prioritize the well-being of you and your family and encourage you to bring your best self to work. Your happiness, health, and resiliency are at the core of our benefits and wellness programs. Our robust total rewards package makes it easier to take care of your whole self and your whole family. We're committed to providing programs and support that encourage personal and professional growth and care for you at work, at home, or wherever you may go. Learn more about our benefits.

You'll also benefit from a supportive work culture that encourages you to learn, develop, and grow your career as you take on challenges and drive innovative solutions for our customers. We empower our team to push the boundaries of what is possible while learning every day in a supportive leading global company. Visit our Careers website to learn more about careers at Applied.

Technical Lead - Software

About Applied
Applied Materials is the leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. Our expertise in modifying materials at atomic levels and on an industrial scale enables customers to transform possibilities into reality.
At Applied Materials, our innovations make possible the technology shaping the future.

Our Team
Our team is developing a high-performance computing solution for low-latency, high-throughput image processing and deep-learning workloads that will enable our chip manufacturing process control equipment to offer differentiated value to our customers.

Your Opportunity
As a technical lead, you will get the opportunity to grow in the fields of high-performance computing, complex system design, and low-level optimization for a better cost of ownership.

Roles and Responsibilities
As a technical lead, you will be responsible for designing and implementing high-performance computing software solutions for our organization. You will work closely with cross-functional teams, including software engineers, product managers, and business stakeholders, to understand requirements and translate them into architectural/software designs that meet business needs. You will be a subject matter expert who unblocks software engineers in the HPC domain. You will be expected to profile systems to understand bottlenecks and to optimize workflows, code, and processes to improve cost of ownership.
- Identify and mitigate technical risks and issues throughout the software development lifecycle
- Lead the design and implementation of complex software components and systems
- Ensure that software systems are scalable, reliable, and maintainable
- Mentor and coach junior software engineers
Your primary focus will be on implementing high-quality features with maintainable and extendable code, following software development best practices.

Our Ideal Candidate
Someone who has the drive and passion to learn quickly, and the ability to multi-task and switch contexts based on business needs.

Qualifications
- 5 to 10 years of experience in design and coding in C/C++, preferably in a Linux environment
- Very good knowledge of data structures, algorithms, and complexity analysis
- In-depth experience in multithreading, thread synchronization, inter-process communication, and distributed computing fundamentals
- Very good knowledge of operating system internals (Linux preferred), networking, and storage systems
- Experience in performance profiling at the application and system level (e.g., VTune, OProfile, perf, Nvidia Nsight)
- Experience in low-level code optimization techniques using vectorization and intrinsics, cache-aware programming, lock-free data structures, etc.
- Excellent problem-solving and analytical skills
- Strong communication and collaboration abilities
- Ability to mentor and coach junior team members
- Experience in Agile development methodologies

Additional Qualifications:
- Experience in GPU programming using CUDA, OpenMP, OpenACC, OpenCL, etc.
- Good knowledge of workflow orchestration software like Apache Airflow, Apache Spark, Apache Storm, or Intel TBB flow graph
- Experience in developing distributed high-performance computing software using parallel programming frameworks like MPI, UCX, etc.
- Experience with HPC job scheduling and cluster management software (SLURM, Torque, LSF, etc.)
- Good knowledge of low-latency, high-throughput data transfer technologies (RDMA, RoCE, InfiniBand)
- Familiarity with microservices architecture, containerization technologies (Docker/Singularity), and low-latency message queues

Education: Bachelor's degree or higher in Computer Science or related disciplines.

Applied Materials is committed to diversity in its workforce, including Equal Employment Opportunity for minorities, females, protected veterans, and individuals with disabilities.

Additional Information
Time Type: Full time
Employee Type: Assignee / Regular
Travel: Yes, 10% of the Time
Relocation Eligible: Yes

Applied Materials is an Equal Opportunity Employer.
Qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, ancestry, religion, creed, sex, sexual orientation, gender identity, age, disability, veteran or military status, or any other basis prohibited by law.
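The multithreading and thread-synchronization fundamentals listed above are classically illustrated by a bounded producer-consumer queue. The posting targets C/C++, but the same pattern is sketched here in Python's `threading`/`queue` modules for brevity:

```python
import queue
import threading

# Bounded queue: producers block when it is full, consumers block when empty.
q = queue.Queue(maxsize=4)
results = []
SENTINEL = None  # end-of-stream marker

def producer(items):
    for item in items:
        q.put(item)       # blocks if the queue is full (backpressure)
    q.put(SENTINEL)       # signal the consumer to stop

def consumer():
    while True:
        item = q.get()    # blocks until an item is available
        if item is SENTINEL:
            break
        results.append(item * item)  # stand-in for real work

t_cons = threading.Thread(target=consumer)
t_prod = threading.Thread(target=producer, args=(range(8),))
t_cons.start()
t_prod.start()
t_prod.join()
t_cons.join()
print(sorted(results))
```

The bounded queue replaces hand-rolled mutexes and condition variables; in C++ the equivalent is a condition_variable-guarded ring buffer, and the blocking `put`/`get` semantics are what provide backpressure between stages.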
Posted 2 weeks ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
Scala, Java, Spark (Spark Streaming, MLlib), Kafka or equivalent cloud big data components, SQL, PostgreSQL, T-SQL/PL-SQL, Hadoop (Airflow, Oozie, HDFS, Sqoop, Hive, Pig, MapReduce), shell scripting, cloud technologies (GCP preferable)

Mandatory Skill Sets: Scala, Spark, GCP
Preferred Skill Sets: Scala, Spark, GCP
Years of Experience Required: 4-8
Education Qualification: B.Tech / M.Tech / MBA / MCA

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Business Administration, Master of Engineering, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)

Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 12 more}

Desired Languages
(If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
Posted 2 weeks ago
6.0 - 11.0 years
18 - 25 Lacs
Hyderabad
Work from Office
SUMMARY
Data Modeling Professional
Location: Hyderabad/Pune
Experience: The ideal candidate should possess at least 6 years of relevant experience in data modeling, with proficiency in SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools).

Key Responsibilities:
- Develop and configure data pipelines across various platforms and technologies
- Write complex SQL queries for data analysis on databases such as SQL Server, Oracle, and Hive
- Create solutions to support AI/ML models and generative AI
- Work independently on specialized assignments within project deliverables
- Provide solutions and tools to enhance engineering efficiency
- Design processes, systems, and operational models for end-to-end execution of data pipelines

Preferred Skills:
- Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is advantageous

Requirements:
- Minimum 6 years of experience in data modeling with SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools)
- Proficiency in writing complex SQL queries for data analysis
- Strong problem-solving and analytical abilities
- Excellent communication and presentation skills
- Ability to deliver high-quality materials against tight deadlines
- Ability to work effectively under pressure with rapidly changing priorities

Note: The ability to communicate efficiently at a global level is paramount.
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The HiLabs Story
HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.
HiLabs Team
Multidisciplinary industry leaders, healthcare domain experts, and AI/ML and data science experts: professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT).
Job Title: Lead/Senior Data Scientist
Job Location: Pune
Job summary: HiLabs is looking for highly motivated and skilled Lead/Senior Data Scientists focused on the application of emerging technologies. Candidates must be well versed in Python, Scala, Spark, SQL, and the AWS platform. Individuals joining the new Evolutionary Platform team should continually strive to advance AI/ML excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.
Responsibilities:
Leverage AI/ML techniques and solutions to identify and mathematically interpret complex healthcare problems.
Carry out full-stack development of data pipelines involving Big Data.
Design and develop robust application/data pipelines using Python, Scala, Spark, and SQL.
Lead a team of data scientists, developers, and clinicians to strategize, design, and evaluate AI-based solutions to healthcare problems.
Increase efficiency and improve the quality of solutions offered.
Manage the complete ETL pipeline development process from conception to deployment.
Collaborate with and guide the team on writing, building, and deploying data software.
Follow best design and development practices to ensure high-quality code.
Design, build, and maintain efficient, secure, reusable, and reliable code.
Perform code reviews, testing, and debugging.
Desired Profile:
Bachelor's or Master's degree in Computer Science, Mathematics, or any other quantitative discipline from Premium/Tier 1 institutions.
5 to 7 years of experience in developing robust ETL data pipelines and implementing advanced AI/ML algorithms (GenAI is a plus).
Strong experience working with technologies such as Python, Scala, Spark, Apache Solr, MySQL, Airflow, and AWS.
Experience working with relational databases such as MySQL, SQL Server, and Oracle.
Good understanding of large system architecture and design.
Understanding of the core concepts of machine learning and the math behind it.
Experience working in AWS/Azure cloud environments.
Experience using version control tools such as Bitbucket/Git.
Experience using tools such as Maven/Jenkins and JIRA.
Experience working in an Agile software delivery environment, with exposure to continuous integration and continuous delivery tools.
Great collaboration and interpersonal skills.
Ability to work with team members and lead by example in code, feature development, and knowledge sharing.
HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results.
Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application. HiLabs Total Rewards: Competitive salary, accelerated incentive policies, H1B sponsorship, a comprehensive benefits package that includes ESOPs, financial contribution for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs, a collaborative working environment, smart mentorship, and highly qualified, multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes. CCPA disclosure notice - https://www.hilabs.com/privacy
Posted 2 weeks ago
3.0 - 10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. 
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Roles & Responsibilities:
Total Experience: 3 to 10 years
Languages: Scala/Python 3.x
File System: HDFS
Frameworks: Spark 2.x/3.x (Batch/SQL API), Hadoop, Oozie/Airflow
Databases: HBase, Hive, SQL Server, Teradata
Version Control System: GitHub
Other Tools: Zendesk, JIRA
Mandatory Skill Sets: Big Data, Python, Hadoop, Spark
Preferred Skill Sets: Big Data, Python, Hadoop, Spark
Years Of Experience Required: 3-10 Years
Education Qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering
Required Skills: Big Data
Optional Skills: Python (Programming Language)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Posted 2 weeks ago