2.0 - 6.0 years
0 Lacs
Karnataka
On-site
You should have hands-on experience in machine learning and deep learning model development using Python. You will be responsible for data quality analysis and data preparation, exploratory data analysis, and visualization of data. Additionally, you will define validation strategies, preprocessing or feature engineering on a given dataset, and data augmentation pipelines. Text processing using machine learning and NLP for processing documents will also be part of your role. Your tasks will include training models, tuning their hyperparameters, analyzing model errors, and designing strategies to overcome them.

You should have experience with Python packages such as NumPy, SciPy, scikit-learn, Theano, TensorFlow, Keras, PyTorch, Pandas, and Matplotlib. Experience in working on Azure OpenAI Studio or OpenAI using Python, LLaMA, or LangChain is required. Moreover, experience in working on Azure Functions, Python Flask/API development/Streamlit, prompt engineering, conversational AI, and embedding models such as word2vec, GloVe, spaCy, and BERT is preferred.

You are expected to possess distinctive problem-solving, strategic, and analytical capabilities, as well as excellent time-management and organization skills. Strong knowledge of programming languages like Python, ReactJS, and SQL, and of big data, is essential. Excellent verbal and written communication skills are necessary for effective interaction between business and technical architects and developers.

You should have 2-4 years of relevant experience and a Bachelor's degree in Computer Science or Computer Engineering, a Master's in Computer Applications, MIS, or a related field. End-to-end development experience in deployment of machine learning models using Python and Azure ML Studio is required. Exposure to developing client-based or web-based software solutions and certification in Machine Learning and Artificial Intelligence will be beneficial. Good to have experience in Power Platform, Power Pages, or Azure OpenAI Studio.

Grant Thornton INDUS comprises GT U.S. Shared Services Center India Pvt Ltd and Grant Thornton U.S. Knowledge and Capability Center India Pvt Ltd. Grant Thornton INDUS is the shared services center supporting the operations of Grant Thornton LLP, the U.S. member firm of Grant Thornton International Ltd. Established in 2012, Grant Thornton INDUS employs professionals across a wide range of disciplines including Tax, Audit, Advisory, and other operational functions. The culture at Grant Thornton INDUS promotes empowered people, bold leadership, and distinctive client service. Working at Grant Thornton INDUS offers an opportunity to be part of something significant and to serve communities in India through inspirational and generous services that give back to the communities they work in. Grant Thornton INDUS has offices in two locations in India: Bengaluru and Kolkata.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
As a Software Engineer in the Direct Platform Quality team at Morningstar's Enterprise Data Platform (EDP), you will play a crucial role in developing and maintaining data quality solutions to enhance Morningstar's client experience. You will collaborate with a quality engineering team to automate the creation of client scorecards and conduct data-specific audit and benchmarking activities. By partnering with key stakeholders like Product Managers and Senior Engineers, you will contribute to the development and execution of data quality control suites.

Your responsibilities will include developing and deploying quality solutions using best practices of software engineering, building applications and services for Data Quality Benchmarking and Data Consistency Solutions, and adding new features as per the Direct Platform Quality initiatives' product roadmap. You will also be required to participate in periodic calls during US or European hours and adhere to coding standards and guidelines.

To excel in this role, you should have a minimum of 3 years of hands-on experience in software engineering with a focus on building and deploying applications for data analytics. Proficiency in Python, object-oriented programming, SQL, and AWS Cloud is essential, with AWS certification being a plus. Additionally, expertise in big data open-source technologies, analytics and ML/AI, public cloud services, and cloud-native architectures is required. Experience in working on data analytics and data quality projects for AMCs, banks, and hedge funds, and in designing complex data pipelines in a cloud environment, will be advantageous. An advanced degree in engineering, computer science, or a related field is preferred, along with experience in the financial domain. Familiarity with Agile software engineering practices and mutual fund, fixed income, and equity data is beneficial.

At Morningstar, we believe in continuous learning and expect you to stay abreast of software engineering, cloud and data science, and financial research trends. Your contributions to the technology strategy will lead to the development of superior products, streamlined processes, effective communication, and faster delivery times. As our products have a global reach, a global mindset is essential for success in this role.

Morningstar is committed to providing an equal opportunity work environment. Our hybrid work model allows for remote work with regular in-person collaboration, fostering a culture of flexibility and connectivity among global colleagues. Join us at Morningstar to be part of a dynamic team that values innovation, collaboration, and personal growth.
Posted 3 weeks ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
You should have a solid working knowledge of AWS database and data services as well as the Power BI stack. Your experience should include gathering requirements, modeling data, and designing and supporting high-performance big data backends and data visualization systems. Additionally, you should be proficient in methodologies and platform stacks such as MapReduce, Spark, streaming solutions (like Kafka and Kinesis), ETL systems (like Glue and Firehose), storage (like S3), warehouse stacks (like Redshift and DynamoDB), and equivalent open-source stacks.

Your responsibilities will involve designing and implementing solutions using visualization technologies like Power BI and Amazon QuickSight. You will also need to maintain and continuously groom the product backlog, the release pipeline, and the product roadmap. It is important for you to capture problem statements and opportunities raised by customers as demand items, epics, and stories. Furthermore, you will be expected to lead database physical design sessions with the engineers in the team and oversee quality assurance and load tests of the solution to ensure the customer experience is maintained. You should also support data governance and data quality (cleansing) efforts.

Your primary skills should include expertise in AWS databases, data services, the Power BI stack, and big data.
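To give a concrete flavor of the AWS streaming stack named above, here is a minimal sketch of publishing events to a Kinesis stream with boto3; the stream name, region, and event fields are illustrative assumptions, not part of the posting.

```python
import json

import boto3

# Hypothetical stream name and region; any event shape works as long as
# each record carries a partition key for shard routing.
kinesis = boto3.client("kinesis", region_name="ap-south-1")

def publish_event(event: dict) -> None:
    kinesis.put_record(
        StreamName="clickstream-events",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event["user_id"]),
    )

publish_event({"user_id": 42, "action": "page_view", "page": "/home"})
```

Downstream, a Firehose delivery stream or Glue job would typically land these records in S3 for Redshift or QuickSight to consume.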
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
Maharashtra
On-site
As a Data Engineer with 6 to 8 years of experience, you will be responsible for designing, developing, and maintaining data pipelines and ETL/ELT processes using Big Data technologies. Your role will involve working extensively with Azure Data Services such as Azure Data Factory, Azure Synapse, Data Lake, and Databricks.

You should have strong knowledge of Big Data ecosystems like Hadoop and Spark, along with hands-on experience in Azure Data Services including ADF, Azure Data Lake, Synapse, and Databricks. Your proficiency in SQL, Python, or Scala for data manipulation and pipeline development will be crucial for this role. Experience with data modeling, data warehousing, and batch/stream processing is required to ensure the quality, integrity, and reliability of data across multiple sources and formats. You will also be expected to handle large-scale data processing using distributed computing tools and optimize the performance of data systems while ensuring security and compliance in the cloud environment.

Collaboration with data scientists, analysts, and business stakeholders is an essential part of this role to deliver scalable data solutions. Therefore, an understanding of CI/CD, version control (Git), and Agile methodologies will be beneficial in this collaborative environment. If you have a passion for working with data and enjoy solving complex data engineering challenges, this role offers an exciting opportunity to contribute to the development of innovative data solutions.
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Data Scientist at the AI/ML CoE of Cyient, you will apply your expertise in artificial intelligence, machine learning, data mining, and information retrieval to design, prototype, and build next-generation advanced analytics engines and services. Your role will involve collaborating with business partners to define technical problem statements, developing efficient analytical models, and incorporating them into analytical data products or tools with the support of a cross-functional team.

Your responsibilities will include collaborating with business partners to develop innovative solutions using cutting-edge techniques, effectively communicating the analytics approach to meet objectives, advocating for data-driven decision-making, and leading analytic approaches to integrate work into applications and tools with data engineers, business leads, analysts, and developers. You will also be responsible for creating dynamic and scalable models, engineering features by combining disparate data sources, and sharing your passion for Data Science with the broader enterprise community.

To be successful in this role, you should have a Bachelor's degree in Data Science, Computer Science, Engineering, Statistics, or a related field with 5+ years of experience, or a graduate degree in a quantitative discipline with a demonstrated Data Science skill set and 2+ years of work experience. Proficiency in Python, complex SQL queries, and machine learning is essential, along with the ability to merge and transform internal and external data sets. Experience with big data technologies, data visualization tools, and supporting deployment and maintenance of models is desired. You should possess exceptional communication and collaboration skills, a bias for action, the ability to efficiently learn and solve new business domains and problems, and a healthy skepticism around the validity of data. Intellectual curiosity, humility, exemplary organizational skills, and the ability to coach, mentor, and collaborate with various stakeholders are also key attributes for this role.
Posted 3 weeks ago
13.0 - 20.0 years
30 - 45 Lacs
Pune
Hybrid
Hi, Wishes from GSN!!! Pleasure connecting with you!!!

We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT / non-IT clients in India, and have been successfully serving the varied needs of our clients for the last 20 years. At present, GSN is hiring a DATA ENGINEERING - Solution Architect for one of our leading MNC clients. PFB the details for your better understanding:

1. WORK LOCATION: PUNE
2. Job Role: DATA ENGINEERING - Solution Architect
3. EXPERIENCE: 13+ yrs
4. CTC Range: Rs. 35 LPA to Rs. 50 LPA
5. Work Type: WFO Hybrid

****** Looking for SHORT JOINERS ******

Job Description:

Who are we looking for:
Architectural Vision & Strategy: Define and articulate the technical vision, strategy, and roadmap for Big Data, data streaming, and NoSQL solutions, aligning with the overall enterprise architecture and business goals.

Required Skills:
- 13+ years of progressive experience in software development, data engineering, and solution architecture roles, with a strong focus on large-scale distributed systems.
- Expertise in Big Data technologies:
- Apache Spark: Deep expertise in Spark architecture, Spark SQL, Spark Streaming, performance tuning, and optimization techniques. Experience with data processing paradigms (batch and real-time).
- Hadoop ecosystem: Strong understanding of HDFS, YARN, Hive, and other related Hadoop components.
- Real-time data streaming - Apache Kafka: Expert-level knowledge of Kafka architecture, topics, partitions, producers, consumers, Kafka Streams, KSQL, and best practices for high-throughput, low-latency data pipelines.
- NoSQL databases: In-depth experience with Couchbase (or MongoDB or Cassandra), including data modeling, indexing, querying (N1QL), replication, scaling, and operational best practices.
- API design & development: Extensive experience in designing and implementing robust, scalable, and secure APIs (RESTful, GraphQL) for data access and integration.
- Programming & code review: Hands-on coding proficiency in at least one relevant language (Python, Scala, Java), with a preference for Python and/or Scala for data engineering tasks. Proven experience in leading and performing code reviews, ensuring code quality, performance, and adherence to architectural guidelines.
- Cloud platforms: Extensive experience in designing and implementing solutions on at least one major cloud platform (AWS, Azure, GCP), leveraging their Big Data, streaming, and compute services.
- Database fundamentals: Solid understanding of relational database concepts, SQL, and data warehousing principles.
- System design & architecture patterns: Deep knowledge of various architectural patterns (e.g., Microservices, Event-Driven Architecture, Lambda/Kappa Architecture, Data Mesh) and their application in data solutions.
- DevOps & CI/CD: Familiarity with DevOps principles, CI/CD pipelines, infrastructure as code (IaC), and automated deployment strategies for data platforms.

****** Looking for SHORT JOINERS ******

Interested? Don't hesitate to call NAK @ 9840035825 / 9244912300 for an IMMEDIATE response.

Best,
ANANTH | GSN | Google review: https://g.co/kgs/UAsF9W
Posted 3 weeks ago
6.0 - 11.0 years
14 - 18 Lacs
Noida
Work from Office
Paytm is India's leading mobile payments and financial services distribution company. A pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology.

About the role: As a Lead Cassandra database administrator (DBA), you will be responsible for the performance, integrity, and security of the database. You'll be involved in the planning and development of the database, as well as in troubleshooting any issues on behalf of the users.

Requirements:
- 6+ years of experience configuring, installing, and managing Cassandra clusters.
- Manage node addition and deletion in Cassandra clusters.
- Monitor Cassandra clusters and implement performance monitoring.
- Configure multi-DC Cassandra clusters.
- Optimize Cassandra performance, including query optimization and other related optimization tools and techniques.
- Implement and maintain Cassandra security and integrity controls, including backup and disaster recovery strategies.
- Upgrade Cassandra clusters.
- Utilize cqlsh, Grafana, and Prometheus for monitoring and administration (a minimal driver-based check is sketched below).
- Design and create database objects such as data table structures (Cassandra column families/tables).
- Perform data migration, backup, restore, and recovery for Cassandra.
- Resolve performance issues, including blocking and deadlocks (as applicable in Cassandra's distributed context).
- Implement and maintain security and integrity controls, including backup and disaster recovery strategies, for document management systems and MySQL databases where these fall within the role's broader scope.
- Translate business requirements into technical design specifications and prepare high-level documentation for database design and database objects.
- Extensive work experience in query optimization, script optimization, and other related optimization tools and techniques.
- Strong understanding of Cassandra database architecture.
- Experience with backup and recovery procedures specific to Cassandra.
- Knowledge of database security best practices.
- Experience with data migration and transformation.
- Ability to work under pressure and meet deadlines.

Why join us:
- A collaborative, output-driven program that brings cohesiveness across businesses through technology.
- Improve the average revenue per user by increasing cross-sell opportunities.
- Solid 360-degree feedback from your peer teams on your support of their goals.

Compensation: If you are the right fit, we believe in creating wealth for you. With 500 mn+ registered users, 21 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
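For illustration, a minimal health-check sketch using the Python cassandra-driver; the contact points and query target are hypothetical placeholders, not Paytm's actual topology.

```python
from cassandra.cluster import Cluster

# Hypothetical contact points for a two-node test cluster.
cluster = Cluster(["10.0.0.1", "10.0.0.2"], port=9042)
session = cluster.connect()

# Quick topology overview, similar in spirit to `nodetool status`.
for host in cluster.metadata.all_hosts():
    print(host.address, host.datacenter, host.rack, "UP" if host.is_up else "DOWN")

# Verify the coordinator responds to CQL.
row = session.execute("SELECT release_version FROM system.local").one()
print("Cassandra release:", row.release_version)

cluster.shutdown()
```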
Posted 3 weeks ago
5.0 - 10.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Overall Responsibilities:
- Data pipeline development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data transformation and processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
- Data quality and validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
- Monitoring and maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Category-wise Technical Skills:
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big data technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and automation: Strong scripting skills in Linux.

Experience:
- 5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- Proven track record of implementing data engineering best practices.
- Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.

Day-to-Day Activities:
- Design, develop, and maintain ETL pipelines using PySpark on CDP (see the sketch after this posting).
- Implement and manage data ingestion processes from various sources.
- Process, cleanse, and transform large datasets using PySpark.
- Conduct performance tuning and optimization of ETL processes.
- Implement data quality checks and validation routines.
- Automate data workflows using orchestration tools.
- Monitor pipeline performance and troubleshoot issues.
- Collaborate with team members to understand data requirements.
- Maintain documentation of data engineering processes and configurations.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Relevant certifications in PySpark and Cloudera technologies are a plus.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity & inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice
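As a rough illustration of the kind of pipeline this role describes, here is a minimal PySpark ETL sketch; the HDFS paths, column names, and partition column are hypothetical assumptions.

```python
from pyspark.sql import SparkSession, functions as F

# Hive support assumes a CDP-style cluster with a configured metastore.
spark = (SparkSession.builder
         .appName("orders-daily-etl")
         .enableHiveSupport()
         .getOrCreate())

# Ingest: hypothetical landing-zone CSV drop.
raw = spark.read.option("header", True).csv("hdfs:///landing/orders/2024-06-01/")

# Transform: deduplicate, cast, and apply a basic quality filter.
clean = (raw.dropDuplicates(["order_id"])
            .withColumn("amount", F.col("amount").cast("double"))
            .filter(F.col("amount").isNotNull()))

# Load: partitioned Parquet that Hive/Impala can query efficiently.
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("hdfs:///warehouse/orders_clean/"))
```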
Posted 3 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Ahmedabad, Remote
Work from Office
Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases (see the sketch after this posting).
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in distributed environments (e.g., AWS, Databricks, Hadoop).
- Document parsing logic, data lineage, and optimization strategies for maintainability.

Qualifications:
- 5+ years of hands-on experience with PySpark and Spark XML libraries (e.g., `spark-xml`) in production environments.
- Proven track record of parsing XML data with 20+ levels of nesting using recursive methods and schema inference.
- Expertise in XPath, XQuery, and DataFrame transformations (e.g., `explode`, `struct`, `selectExpr`) for hierarchical data.
- Strong understanding of Spark optimization techniques: partitioning strategies, broadcast variables, and memory management.
- Experience with distributed computing frameworks (e.g., Hadoop, YARN) and cloud platforms (AWS, Azure, GCP).
- Familiarity with big data file formats (Parquet, Avro) and orchestration tools (Airflow, Luigi).
- Bachelor's degree in Computer Science, Data Engineering, or a related field.

Preferred Skills:
- Experience with schema evolution and versioning for nested XML/JSON datasets.
- Knowledge of Scala or Java for extending Spark XML libraries.
- Exposure to Databricks, Delta Lake, or similar platforms.
- Certifications in AWS/Azure big data technologies.
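By way of illustration, a minimal sketch of flattening one level of a nested XML document with `spark-xml` and `explode`; the file path, `rowTag`, and field names are hypothetical, and real 20-level documents would chain many more unnesting steps, often generated programmatically from the schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

# spark-xml must be on the classpath, e.g.
#   spark-submit --packages com.databricks:spark-xml_2.12:0.17.0 ...
spark = SparkSession.builder.appName("xml-flatten").getOrCreate()

# Hypothetical source: one <Order> element per row.
orders = (spark.read.format("xml")
          .option("rowTag", "Order")
          .load("s3://bucket/raw/orders.xml"))

# Each explode() unnests one level of a repeated element.
lines = (orders
         .select(col("OrderId"), explode(col("Lines.Line")).alias("line"))
         .select(col("OrderId"),
                 col("line.Sku").alias("sku"),
                 col("line.Qty").cast("int").alias("qty")))

lines.show(truncate=False)
```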
Posted 3 weeks ago
4.0 - 9.0 years
8 - 13 Lacs
Pune / Multiple Locations
Work from Office
Role: Senior Databricks Engineer

As a Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 3 weeks ago
18.0 - 23.0 years
15 - 19 Lacs
Hyderabad
Work from Office
About the Role

We are seeking a highly skilled and experienced Data Architect to join our team. The ideal candidate will have at least 18 years of experience in data engineering and analytics and a proven track record of designing and implementing complex data solutions. As a senior principal data architect, you will be expected to design, create, deploy, and manage Blackbaud's data architecture. This role has considerable technical influence within the Data Platform, Data Engineering teams, and the Data Intelligence Center of Excellence at Blackbaud. This individual acts as an evangelist for proper data strategy with other teams at Blackbaud and assists with the technical direction, specifically with data, of other projects.

What you'll do
- Develop and direct the strategy for all aspects of Blackbaud's Data and Analytics platforms, products, and services.
- Set, communicate, and facilitate technical direction more broadly for the AI Center of Excellence, and collaboratively beyond the Center of Excellence.
- Design and develop breakthrough products, services, or technological advancements in the Data Intelligence space that expand our business.
- Work alongside product management to craft technical solutions to solve customer business problems.
- Own the technical data governance practices and ensure data sovereignty, privacy, security, and regulatory compliance.
- Continuously challenge the status quo of how things have been done in the past.
- Build a data access strategy to securely democratize data and enable research, modelling, machine learning, and artificial intelligence work.
- Help define the tools and pipeline patterns our engineers and data engineers use to transform data and support our analytics practice.
- Work in a cross-functional team to translate business needs into data architecture solutions.
- Ensure data solutions are built for performance, scalability, and reliability.
- Mentor junior data architects and team members.
- Keep current on technology: distributed computing, big data concepts, and architecture.
- Promote internally how data within Blackbaud can help change the world.

What you'll bring
- 18+ years of experience in data and advanced analytics.
- At least 8 years of experience working on data technologies in Azure/AWS.
- Expertise in SQL and Python.
- Expertise in SQL Server, Azure Data Services, and other Microsoft data technologies.
- Expertise in Databricks and Microsoft Fabric.
- Strong understanding of data modeling, data warehousing, data lakes, data mesh, and data products.
- Experience with machine learning.
- Excellent communication and leadership skills.

Preferred Qualifications
- Experience working with .Net/Java and microservice architecture.

Stay up to date on everything Blackbaud; follow us on LinkedIn, X, Instagram, Facebook and YouTube.

Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
Posted 3 weeks ago
5.0 - 10.0 years
8 - 12 Lacs
Hyderabad
Work from Office
About the role

The Data Intelligence Center of Excellence is looking for a high-performing Senior Data Scientist to support Blackbaud customers through the creation and maintenance of intelligent data products. Additionally, the senior data scientist will collaborate with team members on research and thought leadership initiatives.

What you'll do
- Use statistical techniques to manage and analyze large volumes of complex data to generate compelling insights, including predictive modeling, storytelling, and data visualization.
- Integrate data from multiple sources to create dashboards and other end-user reports.
- Interact with internal customers to identify and define topics for research and experimentation.
- Contribute to white papers, presentations, and conferences as needed.
- Communicate insights and findings from analyses to product, service, and business managers.
- Work with the data science team to automate and streamline modeling processes.
- Manage standard tables and programs within the data science infrastructure, providing updates as needed.
- Maintain updated documentation of products and processes.
- Participate in team planning and backlog grooming for the data science roadmap.

What you'll bring

We are seeking a Data Scientist with 5+ years of hands-on experience demonstrating strong proficiency in the following areas:
- 2+ years of machine learning and/or statistics experience.
- 2+ years of experience with data analysis in Python, R, SQL, Spark, or similar.
- Comfortable asking questions and performing in-depth research when given vague or incomplete specifications.
- Confidence to learn product functionality on own initiative via back-end research, online training resources, product manuals, and developer forums.
- Experience with Databricks and Databricks SQL Analytics is a plus.

Stay up to date on everything Blackbaud; follow us on LinkedIn, X, Instagram, Facebook and YouTube.

Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
Posted 3 weeks ago
10.0 - 18.0 years
50 - 65 Lacs
Noida
Work from Office
Role & responsibilities

Havells considers Analytics & AI to play a critical role in driving growth, shaping consumer experience, sales optimization, and cost savings. This role will require leading the Data Science, AI & Data Engineering vertical for Havells, building the team as well as the capability to drive significant business impact using AI, predictive models, machine learning, and advanced analytics. The role will require delivering various use cases and AI projects in multiple domains like Consumer, Sales Ops, Retail, Manufacturing, or SCM. The candidate should be a passionate, seasoned Data/Analytics/AI leader with proven capability in building a high-quality team that can envision and deliver high-impact analytics and AI use cases for the organization.

Key responsibilities:

Analytics & Data Science strategy formulation & execution: Participate in the conceptualization of the AI & Data Science strategy; develop, execute, and sustain it; strive to build a practice and organization culture around the same.

Manage & enrich big data: Work on the data warehouse for sales, consumer, and manufacturing data and all user attributes; think of ways to enrich data (both structured and unstructured). Engage with the Data Engineering team on building, maintaining, and enhancing the Data Lake/Warehouse in MS Azure Databricks (cloud). Handle data preparation and new attribute development, preparing a Single View of consumers, a Single View of retailers/electricians, etc.

Consumer insights & campaign analysis: Drive ad hoc analysis and regular insights from data to drive campaign performance. Build a GenAI-powered Consumer Insights Factory. Support insights from consumer, loyalty, app, sales, and transactional data. Apply data mining/AI/ML to support upsell/cross-sell/retention/loyalty/engagement campaigns and target audience identification, and purchase behavior/market basket analysis.

Predictive analytics & advanced data science: Build and maintain predictive AI/ML models for use cases in the Consumer domain, Consumer Experience (CX), and Service, for example: product recommendations; likelihood to buy a product or service (AMC); lead scoring for conversion; service risk scoring or service franchise/technician performance scores; likelihood to be a detractor or to churn; market mix modelling.

Dashboarding: Build, manage, and support various MIS/dashboards via the Data Engineering/Visualization team, including Power BI dashboards, other visualizations, and ad hoc dashboards.

Analytics & data science in other domains (SCM, Sales Ops, Manufacturing, Marketing): Support AI/ML models for Sales Transformation, SCM, or Marketing use cases: market mix modeling; retailer/electrician loyalty program optimization; retailer/partner risk scoring or churn prediction; product placement and channel partner classification; improving forecast accuracy and out-of-stock prediction; and deep data mining to support digital analytics, website behavior, app behavior analytics, call center/CS behavior, NPS, and retailer and electrician loyalty.

Gen AI use cases: Extensively leverage LLM models and agentic AI capabilities to solve business use cases: a chatbot for business users, data mining at fingertips, a chatbot for consumers, a service voice agent, manufacturing use cases, etc.

Data engineering: Build and support all data engineering work to build and sustain a best-in-class, efficient data infrastructure, data lake, and datamart on the cloud. Sustain and optimize the current migration of the EDW to MS Azure Databricks. Build an agentic AI platform on Vertex. Optimize the data architecture for efficient outputs across all analytics areas: data visualization, predictive ML models, agentic AI, etc. Integrate new data sources and data types, including unstructured data, into the Databricks data lake.

Preferred candidate profile

10-18 years of direct experience in predictive analytics, decision science, and GenAI in at least two of the following domains: consumer, sales operations, supply chain, manufacturing, in any industry. Hands-on experience and knowledge of modern analytical tools, techniques, and software (Python, R, SQL, SPSS, SAS). Experience of building and leading a team is a must.
Posted 3 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Gurugram, Bengaluru
Work from Office
Senior Developer/Lead - Data Science

Material+ is hiring for Lead Data Science. We are looking for a Senior Developer/Lead - Data Scientist with strong Generative AI experience, skilled in Python, TensorFlow/PyTorch/scikit-learn, Azure OpenAI GPT, and multi-agent frameworks like LangChain or AutoGen, with strong data preprocessing, feature engineering, and model evaluation expertise. Bonus: familiarity with Big Data tools (Spark, Hadoop, Databricks, SQL/NoSQL) and ReactJS. Immediate joiner required.

Minimum Experience: 4+ years as a Senior Developer/Lead - Data Scientist
Preferred Location: Gurgaon/Bangalore

Job Description:
- Generative AI: GenAI models (e.g., Azure OpenAI GPT) and multi-agent system architecture.
- Proficiency in Python and AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Experience with LangChain, AutoGen, or similar frameworks for multi-agent systems.
- Strong knowledge of Data Science techniques, including data preprocessing, feature engineering, and model evaluation.
- Nice to have: familiarity with Big Data tools (e.g., Spark, Hadoop, Databricks) and databases (e.g., SQL, NoSQL).
- Expertise in ReactJS for building responsive and interactive user interfaces.

What We Offer
- Professional development and mentorship.
- Hybrid work mode with a remote-friendly workplace (6 times in a row Great Place To Work certified).
- Health and family insurance.
- 40+ leaves per year along with maternity & paternity leaves.
- Wellness, meditation, and counselling sessions.
Posted 3 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
This is Adyen

Adyen provides payments, data, and financial products in a single solution for customers like Meta, Uber, H&M, and Microsoft, making us the financial technology platform of choice. At Adyen, everything we do is engineered for ambition. For our teams, we create an environment with opportunities for our people to succeed, backed by the culture and support to ensure they are enabled to truly own their careers. We are motivated individuals who tackle unique technical challenges at scale and solve them as a team. Together, we deliver innovative and ethical solutions that help businesses achieve their ambitions faster.

Data Engineer

We are looking for a Data Engineer to join the Payment Engine Data team in Bengaluru, our newest Adyen office. The main goal of the Payment Engine Data (PED) team is to provide insightful data and solutions for processing payments using all of Adyen's payment options. These consist of data pipelines between various systems, dashboards offering insights into payment processing, internal and external reporting, additional data products, and infrastructure. The ideal candidate is able to understand the business context and relate it to the underlying data requirements. You should also excel at building top-notch data pipelines on our big data platform. At Adyen, your work as a Data Engineer will be vital in forming our data infrastructure and guaranteeing the seamless flow of data across various systems.

What you'll do
- Develop high-quality data pipelines: Design, develop, deploy, and operate ETL/ELT pipelines in PySpark. Your work will directly contribute to the creation of reports, tools, analytics, and datasets for both internal and external use.
- Collaborative solution development: Partner with various teams, engineers, and data analysts to understand data requirements and transform these insights into effective data pipelines.
- Orchestrate data flow: Utilise orchestration tools to manage data pipelines efficiently; experience in Airflow is a significant advantage (see the sketch after this posting).
- Champion data best practices: Advocate for performance, testing, code quality, data validation, data governance, and discoverability. Ensure that the data provided is accurate, performant, and reliable.
- Performance optimisation: Identify and resolve performance bottlenecks in data pipelines and systems. Optimise query performance and resource utilisation to meet SLAs and performance requirements, using technologies such as caching, indexing, partitioning, and other Spark optimisations.
- Knowledge sharing and training: Scale your knowledge throughout the organisation, enhancing overall data literacy.

Who you are
- Experienced in big data: At least 5 years of experience working as a Data Engineer or in a similar role.
- Data & engineering practices: You possess an expert-level understanding of both software and data engineering practices.
- Technical superstar: Highly proficient in tools and languages such as Python, PySpark, Airflow, Hadoop, Spark, Kafka, SQL, Git, and S3. Looker is a plus.
- Clear communicator: Skilled at articulating complex data-related concepts and outcomes to a diverse range of stakeholders.
- Self-starter: Capable of independently recognizing opportunities, devising solutions, and leading, prioritizing, and owning projects.
- Innovator: You have an experimental mindset with a launch-fast-and-iterate mentality.
- Data culture champion: Experienced in fostering a data-centric culture within large, technical organizations and setting standards for excellence and continuous improvement.

Data positions at Adyen: We know companies use different definitions for their data-related positions; this is, for instance, dependent on the size of a company. We have categorized and defined all our positions. Have a look at this blogpost to find out!

Our Diversity, Equity and Inclusion commitments

Studies show that women and members of underrepresented communities apply for jobs only if they meet 100% of the qualifications. Does this sound like you? If so, Adyen encourages you to reconsider and apply. We look forward to your application!

What's next?

Ensuring a smooth and enjoyable candidate experience is critical for us. We aim to get back to you regarding your application within 5 business days. Our interview process tends to take about 4 weeks to complete, but may fluctuate depending on the role. Learn more about our hiring process here. Don't be afraid to let us know if you need more flexibility.

This role is based out of our Bengaluru office. We are an office-first company and value in-person collaboration; we do not offer remote-only roles.
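As a minimal sketch of the Airflow-style orchestration mentioned above; the DAG id, schedule, and spark-submit commands are hypothetical assumptions, not Adyen's actual pipelines.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical nightly ELT: extract with Spark, then transform.
with DAG(
    dag_id="payments_daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # every night at 02:00
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="spark-submit extract_payments.py --date {{ ds }}",
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit transform_payments.py --date {{ ds }}",
    )
    extract >> transform  # transform runs only after extract succeeds
```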
Posted 3 weeks ago
3.0 - 6.0 years
6 - 11 Lacs
Pune
Work from Office
Position-specific duties: Supporting data engineering pipelines. Required skills: AWS, Databricks, PySpark, SQL.

Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge, in research, design, development, and maintenance.
3. Exercises original thought and judgement and supervises the technical and administrative work of other software engineers.
4. Builds skills and expertise in the software engineering discipline to reach the standard software engineer skill expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Skills (competencies): Verbal Communication
Posted 3 weeks ago
8.0 - 13.0 years
18 - 22 Lacs
Mumbai, Chennai, Bengaluru
Work from Office
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Your role

In this role you will play a key part in Data Strategy. We are looking for candidates with 8+ years of experience in Data Strategy (tech architects, senior BAs) who will support our product, sales, and leadership teams by creating data-strategy roadmaps. The ideal candidate is adept at understanding as-is enterprise data models to help data scientists and data analysts provide actionable insights to the leadership. They must have strong experience in understanding data, using a variety of data tools. They must have a proven ability to understand the current data pipeline and ensure a minimal-cost solution architecture is created, and must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.

You will identify, design, and recommend internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. You will identify data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. You will work with data and analytics experts to create frameworks for digital twins/digital threads, drawing on relevant experience in data exploration and profiling, be involved in data literacy activities for all stakeholders, and coordinate with cross-functional teams as the SPOC for global master data.

Your profile
- 8+ years of experience in a Data Strategy role, with a graduate degree in Computer Science, Informatics, Information Systems, or another quantitative field.
- Experience with understanding big data tools (Hadoop, Spark, Kafka, etc.), relational SQL and NoSQL databases (including Postgres and Cassandra/MongoDB), and data pipeline and workflow management tools (Luigi, Airflow, etc.).
- 5+ years of advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases (Postgres, SQL Server, MongoDB).
- 2+ years of working knowledge in Data Strategy: Data Governance, MDM, etc.
- 5+ years of experience in creating data strategy frameworks/roadmaps, in analytics and data maturity evaluation based on a current as-is vs. to-be framework, and in creating functional requirements documents and enterprise to-be data architecture.
- Relevant experience in identifying and prioritizing use cases for the business, including identification of important KPIs and opex/capex for CXOs.
- 4+ years of experience in a Data Analytics operating model with a vision spanning prescriptive, descriptive, predictive, and cognitive analytics.

What you will love about working here

We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

Location: Bengaluru, Mumbai, Chennai, Noida, Pune, Hyderabad
Posted 3 weeks ago
5.0 - 10.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Mines and extracts data and applies the statistics and algorithms necessary to derive insights for Digital Mine products and/or services.
- Supports the generation of an automated insights-generation framework for business partners to effectively interpret data.
- Provides actionable insights through data science on Personalization, Search & Navigation, SEO & Promotions, Supply Chain, Services, and other related services.
- Develops dashboard reports that measure financial results, customer satisfaction, and engagement metrics.
- Conducts deep statistical analysis, including predictive and prescriptive modeling, in order to provide the organization a competitive advantage.
- Maintains expert-level knowledge of industry trends, emerging technologies, and new methodologies and applies it to projects.
- Contributes subject-matter expertise on automation and analytical projects, collaborating across functions.
- Translates requirements into an analytical approach; asks the right questions to understand the problem; validates understanding with the stakeholder or manager.
- Contributes to building the analytic approach to solving a business problem; helps identify the sources, methods, parameters, and procedures to be used; clarifies expectations with stakeholders.
- Leverages a deep understanding of statistical techniques and tools to analyze data according to the project plan; communicates with stakeholders to provide updates.
- Prepares final recommendations, ensuring solutions are best-in-class, implementable, and scalable in the business.
- Executes plans for measuring impact based on discussions with stakeholders, partners, and senior team members.
- Executes projects with full adherence to enterprise project management practices.
Posted 3 weeks ago
4.0 - 9.0 years
4 - 9 Lacs
Chennai
Work from Office
Mandatory skill set: Big Data developer.

Interested candidates can share an updated resume to karthigaa.chinnasamy@aspiresys.com

Thanks & Regards
Karthigaa Chinnasamy | HR - Talent Acquisition
Mobile: +91-9092938886
Website: www.aspiresys.com | Blog: http://blog.aspiresys.com
Posted 3 weeks ago
5.0 - 10.0 years
18 - 20 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled and motivated Data Scientist to join our dynamic team. The ideal candidate will have a strong foundation in data analysis, statistical methods, and machine learning techniques, and will be responsible for transforming data into actionable insights to drive decision-making processes and improve business performance.

Responsibilities / Tasks
- Analyze large datasets to identify trends, patterns, and insights.
- Develop and implement machine learning models and algorithms.
- Collaborate with cross-functional teams to understand business requirements and provide data-driven solutions.
- Present findings and recommendations to stakeholders through reports and visualizations.
- Ensure data integrity and accuracy by performing regular data validation and cleansing.
- Stay updated with the latest industry trends and advancements in data science and machine learning.

Your Profile / Qualifications
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related field.
- 5+ years of proven experience as a Data Scientist or in a similar role.
- Strong proficiency in programming languages such as Python and SQL.
- Experience with data visualization tools like Tableau, Power BI, or Matplotlib.
- Familiarity with machine learning frameworks and libraries such as TensorFlow, PyTorch, or scikit-learn.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills and the ability to convey complex information to non-technical stakeholders.
- Experience with big data technologies such as Hadoop or Spark, or with cloud platforms.

Did we spark your interest? Then please click apply above to access our guided application process.
Posted 3 weeks ago
4.0 - 8.0 years
10 - 14 Lacs
Pune
Work from Office
Job Title: Big Data Engineer - Scala

Job Description - Preferred Skills:
- Strong skills in messaging technologies like Apache Kafka or equivalent (see the sketch below).
- Programming skills in Scala and Spark (with optimization techniques) and Python.
- Should be able to write queries through Jupyter Notebook.
- Experience with orchestration tools like NiFi and Airflow.
- Design and implement intuitive, responsive UIs that allow issuers to better understand data and analytics.
- Experience with SQL and distributed systems.
- Strong understanding of cloud architecture.
- Ensure a high-quality code base by writing and reviewing performant, well-tested code.
- Demonstrated experience building complex products.
- Knowledge of Splunk or other alerting and monitoring solutions.
- Fluent in the use of Git and Jenkins.
- Broad understanding of software engineering concepts and methodologies is required.
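The posting lists both Scala and Python; as a hedged illustration in Python, here is a minimal Structured Streaming consumer for a Kafka topic. The broker address, topic name, and message schema are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

# Requires the spark-sql-kafka connector package on the classpath.
spark = SparkSession.builder.appName("kafka-consumer").getOrCreate()

schema = (StructType()
          .add("txn_id", StringType())
          .add("amount", DoubleType()))

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "transactions")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Console sink for demonstration; production jobs would write to a table.
query = events.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```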
Posted 3 weeks ago
3.0 - 6.0 years
9 - 13 Lacs
Ahmedabad
Work from Office
About the job:

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 3 weeks ago
5.0 - 8.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Job Description Summary

The Sr Data Analyst - BI Reporting will play a key role in developing end-to-end reporting solutions, from data collection and transformation to report generation and visualization. This role involves working on the cutting edge of data engineering and analytics, leveraging machine learning, predictive modeling, and generative AI to drive business insights.

Roles and Responsibilities
- Design visualizations and create dashboards/reports using Power BI (Tableau experience is good to have).
- Explore, clean, and visualize data sets to prepare for analysis/reporting, ensuring data quality and consistency.
- Develop and maintain BI semantic data models for large-scale data warehouses/data lakes, eventually consumed by reporting tools.
- Leverage SQL and big data tools (e.g., Hadoop, Spark) for data manipulation and optimization.
- Build advanced data models and pipelines using SQL and other tools.
- Ensure data quality, consistency, and integrity throughout the data lifecycle.
- Collaborate closely with data engineers, analysts, and other stakeholders to understand data requirements and optimize the data flow architecture.
- Document data processes, data architecture, modeling/flow charts, and best practices for future reference and knowledge sharing.

Desired Characteristics - Technical Expertise
- 5 to 8 years of experience in data analytics, data mining/integration, BI development, reporting, and insights.
- Strong knowledge of SQL and experience with big data technologies such as Hadoop, Spark, or similar tools for data massaging/manipulation.
- Develop advanced visualizations/reports to highlight trends, patterns, and outliers, making complex data easily understandable for various business functions.
- Implement UI/UX best practices to improve navigation, data storytelling, and the overall usability of dashboards, ensuring that reports are actionable and user-friendly and provide the desired insights.

Additional Information

Relocation Assistance Provided: Yes
Posted 3 weeks ago
4.0 - 6.0 years
11 - 12 Lacs
Bengaluru
Work from Office
Job Description: Data Engineering - Big Data, ETL, Data
Job Profile Summary:
In this role, you will support the Data Engineering team in setting up the Data Lake on Cloud and implementing a standardized Data Model. You will develop data pipelines for new sources, data transformations within the Data Lake, CI/CD, and data delivery as per the business requirements.
Job Description:
- Build pipelines to bring in a wide variety of data from multiple sources within the organization, as well as from social media and public data sources (see the ingestion sketch after this posting).
- Collaborate with cross-functional teams to source data and make it available for downstream consumption.
- Work with the team to provide an effective solution design to meet business needs.
- Ensure regular communication with key stakeholders; understand any key concerns about how the initiative is being delivered and any risks/issues that have not yet been identified or are not being progressed.
- Ensure dependencies and challenges (risks) are escalated and managed. Escalate critical issues to the Sponsor and/or Head of Data Engineering.
- Ensure timelines (milestones, decisions, and delivery) are managed and the value of the initiative is achieved, without compromising quality and within budget.
- Ensure an appropriate and coordinated communications plan is in place for initiative execution and delivery, both internal and external.
- Ensure final handover of the initiative to business-as-usual processes; carry out a post-implementation review (as necessary) to confirm the initiative's objectives have been delivered, and feed any lessons learned into future initiative management processes.
Who we are looking for - Competencies & Personal Traits:
- Work as a team player
- Excellent problem analysis skills
- Experience with at least one cloud infrastructure provider (Azure/AWS)
- Experience in building data pipelines using batch processing with Apache Spark (Spark SQL, DataFrame API) or Hive query language (HQL)
- Knowledge of big data ETL processing tools
- Experience with Hive and Hadoop file formats (Avro/Parquet/ORC)
- Basic knowledge of scripting (shell/bash)
- Experience working with multiple data sources, including relational databases (SQL Server/Oracle/DB2/Netezza), NoSQL/document databases, and flat files
- Basic understanding of CI/CD tools such as Jenkins, JIRA, Bitbucket, Artifactory, Bamboo, and Azure DevOps
- Basic understanding of DevOps practices using Git version control
- Ability to debug, fine-tune, and optimize large-scale data processing jobs
Working Experience: 1-3 years of broad experience working with Enterprise IT applications in cloud platform and big data environments.
Professional Qualifications: Certifications related to Data and Analytics would be an added advantage.
Education: Master's or Bachelor's degree in STEM (Science, Technology, Engineering, Mathematics)
Language: Fluency in written and spoken English
EXPERIENCE: 4.5-6 Years
SKILLS
Primary Skill: Data Engineering
Sub Skill(s): Data Engineering
Additional Skill(s): Azure Datalake, Azure Data Factory
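As a sketch of the ingestion work described above, the following PySpark job pulls from a relational source over JDBC and lands Parquet in an Azure Data Lake path. The JDBC URL, credentials, table names, and storage path are placeholders, not real endpoints.

```python
# Hedged sketch of a batch ingestion pipeline: relational source -> data lake.
# Connection details and paths below are placeholders, not real endpoints.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-ingest").getOrCreate()

# Extract: incremental read from a source table (the date predicate is an assumption).
customers = (
    spark.read.format("jdbc")
         .option("url", "jdbc:sqlserver://source-host:1433;databaseName=crm")
         .option("dbtable", "(SELECT * FROM dbo.customers WHERE updated_at >= '2024-01-01') src")
         .option("user", "etl_user")
         .option("password", "***")
         .load()
)

# Load: write to the data lake in a columnar format for downstream consumption.
(customers.write.mode("append")
          .partitionBy("country")
          .parquet("abfss://lake@account.dfs.core.windows.net/curated/customers/"))
```

The same extract-and-land pattern extends to the other formats the posting names (Avro, ORC) by swapping the writer format.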
Posted 3 weeks ago
10.0 - 12.0 years
20 - 25 Lacs
Pune
Work from Office
Key Responsibilities :
As an Enterprise Data Architect, you will :
- Lead Data Architecture : Design, develop, and implement comprehensive enterprise data architectures, primarily leveraging Azure and Snowflake platforms.
- Data Transformation & ETL : Oversee and guide complex data transformation and ETL processes for large and diverse datasets, ensuring data integrity, quality, and performance.
- Customer-Centric Data Design : Specialize in designing and optimizing customer-centric datasets from various sources, including CRM, Call Center, Marketing, Offline, and Point of Sale systems.
- Data Modeling : Drive the creation and maintenance of advanced data models, including Relational, Dimensional, Columnar, and Big Data models, to support analytical and operational needs.
- Query Optimization : Develop, optimize, and troubleshoot complex SQL and NoSQL queries to ensure efficient data retrieval and manipulation.
- Data Warehouse Management : Apply advanced data warehousing concepts to build and manage high-performing, scalable data warehouse solutions.
- Tool Evaluation & Implementation : Evaluate, recommend, and implement industry-leading ETL tools such as Informatica and Unifi, ensuring best practices are followed.
- Business Requirements & Analysis : Lead efforts in business requirements definition and management, structured analysis, process design, and use case documentation to translate business needs into technical specifications.
- Reporting & Analytics Support : Collaborate with reporting teams, providing architectural guidance and support for reporting technologies like Tableau and PowerBI.
- Software Development Practices : Apply professional software development principles and best practices to data solution delivery.
- Stakeholder Collaboration : Interface effectively with sales teams and directly engage with customers to understand their data challenges and lead them to successful outcomes.
- Project Management & Multi-tasking : Demonstrate exceptional organizational skills, with the ability to manage and prioritize multiple simultaneous customer projects effectively.
- Strategic Thinking & Leadership : Act as a self-managed, proactive, and customer-focused leader, driving innovation and continuous improvement in data architecture.
Position Requirements :
- Strong experience with data transformation & ETL on large data sets.
- Experience with designing customer-centric datasets (e.g., CRM, Call Center, Marketing, Offline, Point of Sale).
- 5+ years of Data Modeling experience (Relational, Dimensional, Columnar, Big Data).
- 5+ years of complex SQL or NoSQL experience.
- Extensive experience in advanced Data Warehouse concepts.
- Proven experience with industry ETL tools (e.g., Informatica, Unifi).
- Solid experience with business requirements definition and management, structured analysis, process design, and use case documentation.
- Experience with reporting technologies (e.g., Tableau, PowerBI).
- Demonstrated experience in professional software development.
- Exceptional organizational skills and the ability to multi-task across simultaneous customer projects.
- Strong verbal & written communication skills to interface with sales teams and lead customers to successful outcomes.
- Must be self-managed, proactive, and customer-focused.
Technical Skills :
- Cloud Platforms : Microsoft Azure
- Data Warehousing : Snowflake
- ETL Methodologies : Extensive experience in ETL processes and tools
- Data Transformation : Large-scale data transformation
- Data Modeling : Relational, Dimensional, Columnar, Big Data (a star-schema query sketch follows this list)
- Query Languages : Complex SQL, NoSQL
- ETL Tools : Informatica, Unifi (or similar enterprise-grade tools)
- Reporting & BI : Tableau, PowerBI
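For a concrete sense of the dimensional modeling this role covers, below is a small Spark SQL sketch querying a hypothetical star schema (fact_sales joined to dim_date and dim_customer). All table and column names are assumptions for illustration, not part of the posting.

```python
# Illustrative star-schema query; fact_sales, dim_date, and dim_customer
# are hypothetical tables used only to show the dimensional-model pattern.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dim-model-demo").getOrCreate()

monthly_revenue = spark.sql("""
    SELECT d.year, d.month, c.segment, SUM(f.revenue) AS revenue
    FROM fact_sales f
    JOIN dim_date d     ON f.date_key = d.date_key
    JOIN dim_customer c ON f.customer_key = c.customer_key
    GROUP BY d.year, d.month, c.segment
""")
monthly_revenue.show()
```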
Posted 3 weeks ago