
40 ETL/ELT Processes Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access; you apply directly on the employer's job portal.

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

At Capgemini Invent, we believe that difference drives change. As inventive transformation consultants, we combine our strategic, creative, and scientific capabilities to collaborate closely with clients in delivering cutting-edge solutions. Join our team to lead transformation tailored to our clients' challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, and underpinned by purpose-driven technology.

What you will appreciate about working with us: We recognize the importance of flexible work arrangements. Whether it's remote work or flexible work hours, you will find an environment that fosters a healthy work-life balance. Your career growth is at the core of our mission: our career growth programs and diverse professions are designed to help you explore a world of opportunities, and you can equip yourself with valuable certifications in the latest technologies, such as Generative AI.

Your Role: We are seeking a skilled PySpark Developer with expertise in Azure Databricks (ADB) and Azure Data Factory (ADF) to join our team. The ideal candidate will play a pivotal role in designing, developing, and implementing data solutions using PySpark for large-scale data processing and analytics.

Your Profile:
- Design, develop, and deploy PySpark applications and workflows on Azure Databricks for data transformation, cleansing, and aggregation (see the sketch below).
- Implement data pipelines using Azure Data Factory (ADF) to orchestrate ETL/ELT processes across heterogeneous data sources.
- Conduct regular risk assessments to identify potential vulnerabilities in data processing workflows.
- Collaborate with Data Engineers and Data Scientists to integrate and process structured and unstructured data sets into actionable insights.

Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With a responsible and diverse group of 340,000 team members in more than 50 countries and a heritage of over 55 years, Capgemini is trusted by clients to unlock the value of technology across the entire breadth of their business needs. It delivers end-to-end services and solutions, from strategy and design to engineering, fueled by market-leading capabilities in AI, generative AI, cloud, and data, combined with deep industry expertise and a strong partner ecosystem.
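For illustration only, here is a minimal PySpark sketch of the cleanse-and-aggregate workflow described above; the lake paths and column names (event_id, customer_id, amount) are hypothetical, not taken from the listing.

```python
# Minimal PySpark sketch: cleanse raw records, then aggregate per customer.
# On Databricks a SparkSession already exists as `spark`; the builder line is
# only needed when running standalone. All paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

raw = spark.read.parquet("/mnt/datalake/raw_events")  # hypothetical path

cleansed = (
    raw.dropDuplicates(["event_id"])                     # remove duplicate events
       .filter(F.col("amount").isNotNull())              # drop records missing amount
       .withColumn("event_date", F.to_date("event_ts"))  # normalize timestamp to date
)

daily_totals = (
    cleansed.groupBy("customer_id", "event_date")
            .agg(F.sum("amount").alias("total_amount"),
                 F.count("*").alias("event_count"))
)

daily_totals.write.mode("overwrite").parquet("/mnt/datalake/curated/daily_totals")
```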

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Architect with over 4 years of experience, you will be responsible for designing and optimizing data pipelines that integrate various data sources to support business intelligence and advanced analytics. Your role will involve developing data models and flows that enable personalized customer experiences and support omnichannel marketing and customer engagement. You will lead efforts to ensure data governance, data quality, and data security, including compliance with regulations such as GDPR and CCPA. Additionally, you will implement and maintain data warehousing solutions in Snowflake to handle large-scale data processing and analytics needs.

Your responsibilities will also include optimizing workflows to streamline data transformation and modeling processes. You will leverage Azure for cloud infrastructure, data storage, and real-time data analytics, ensuring that the architecture supports scalability and performance. Collaboration with cross-functional teams, including data engineers, analysts, and business stakeholders, will be essential to ensure that data architectures meet business needs. Supporting both real-time and batch data integration will be crucial to making data accessible for actionable insights and decision-making. You will also continuously assess and integrate new data technologies and methodologies to enhance the organization's data capabilities.

To qualify for this role, you should have at least 4 years of experience in Data Architecture or Data Engineering, with expertise in Snowflake and Azure. A strong understanding of data modeling, ETL/ELT processes, and modern data architecture frameworks is required. Experience designing scalable data architectures for personalization and customer analytics across marketing, sales, and customer service domains is essential. You should also have expertise with cloud data platforms (preferably Azure) and Big Data technologies for large-scale data processing. Hands-on experience with Python for data engineering tasks and scripting is preferred.

Primary Skills:
- Around 3+ years of relevant hands-on experience in dbt, Snowflake, CI/CD, SQL, and Python (nice to have)
- Taking ownership of tasks
- Eagerness to learn, good communication skills, and enthusiasm for upskilling

If you are a motivated and experienced Data Architect looking to work on challenging projects in a dynamic environment, we encourage you to apply for this position.
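As a hedged illustration of the ELT pattern this role centers on (load into Snowflake, then transform in-warehouse), here is a minimal sketch using the snowflake-connector-python package; the account, credentials, stage, and table names are all placeholders.

```python
# Minimal ELT sketch with the Snowflake Python connector: land raw data,
# then transform inside the warehouse (the "T" runs in Snowflake, as in ELT).
# Account, credentials, stage, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)

with conn.cursor() as cur:
    # Load step: copy staged files into a raw table.
    cur.execute("COPY INTO raw_orders FROM @orders_stage FILE_FORMAT = (TYPE = CSV)")
    # Transform step: build a cleaned, deduplicated model downstream.
    cur.execute("""
        CREATE OR REPLACE TABLE ANALYTICS.MARTS.ORDERS AS
        SELECT order_id, customer_id, MAX(order_ts) AS order_ts, SUM(amount) AS amount
        FROM raw_orders
        GROUP BY order_id, customer_id
    """)

conn.close()
```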

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Technical Solution Analyst, you will be responsible for designing, developing, and automating solutions to enhance business processes. Your main tasks will involve TS configuration, data preparation, problem-solving, and automation, collaborating with stakeholders to drive efficiency through data-driven insights and technical expertise. A strong analytical mindset, programming skills, and prior experience with business intelligence and automation tools are essential.

In this role, you will:
- Develop and configure TSA & TSE solutions tailored to business requirements.
- Manage TS product configuration and offer troubleshooting expertise.
- Prepare, clean, and oversee data for reporting, automation, and analytics purposes.
- Apply analytical and programming skills to identify and resolve technical and business challenges.
- Align solutions with business objectives by mapping use cases effectively.
- Create tools and automation solutions to optimize processes.
- Work with SQL databases, BI tools (such as Tableau, Power BI, Looker), and ETL/ELT pipelines.
- Integrate APIs and write scripts to streamline automation and enhance efficiency (see the sketch below).
- Collaborate with stakeholders, document technical solutions, and provide support for debugging.

To be successful in this role, you should have:
- 2-5 years of experience in a similar role involving data, automation, and solution development.
- Proficiency in data analysis and visualization.
- Strong skills in SQL and database management.
- Hands-on experience with BI tools like Tableau, Power BI, and Looker.
- Knowledge of ETL/ELT processes for data transformation.
- Programming expertise in Python, R, or JavaScript.
- Experience with API integration and automation frameworks.
- Strong problem-solving and debugging abilities.
- Excellent stakeholder communication and documentation skills.
- Previous UI development experience would be a bonus.

ThoughtSpot offers a dynamic work environment as the experience layer of the modern data stack, leading with AI-powered analytics and natural language search capabilities. The company values diversity, unique perspectives, and a culture of Selfless Excellence and continuous improvement. ThoughtSpot encourages individuals from all backgrounds to apply, as diversity and inclusivity are essential for innovative solutions and business success. If you are passionate about working with a talented team and contributing to an innovative company, we invite you to explore more about our mission and apply for a role that aligns with your skills and aspirations.
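A small, hypothetical example of the API-to-SQL automation this role describes: pull records from a REST endpoint and load them into a table a BI tool could query. The endpoint, schema, and field names are invented for illustration.

```python
# Minimal automation sketch: fetch records from a REST API and upsert them
# into a SQL table for downstream reporting. Endpoint and schema are hypothetical.
import sqlite3

import requests

resp = requests.get("https://api.example.com/v1/tickets", timeout=30)
resp.raise_for_status()
tickets = resp.json()  # assume a list of {"id": ..., "status": ..., "opened_at": ...}

conn = sqlite3.connect("analytics.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS tickets (
        id INTEGER PRIMARY KEY,
        status TEXT,
        opened_at TEXT
    )
""")
conn.executemany(
    "INSERT OR REPLACE INTO tickets (id, status, opened_at) VALUES (?, ?, ?)",
    [(t["id"], t["status"], t["opened_at"]) for t in tickets],
)
conn.commit()
conn.close()
```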

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

At Dario, every day presents a new opportunity for you to make a difference. The company is dedicated to simplifying better health, and your contribution as an employee directly impacts the health improvement of hundreds of thousands of individuals worldwide. If you are passionate, intelligent, and a team player, with a drive to make a meaningful impact in your career, then we are looking for you.

Your responsibilities will include designing and implementing test plans to validate data pipelines, transformations, and business logic, and conducting data validation and reconciliation across systems such as Snowflake and dbt. Developing automated data quality checks using Python, SQL, and testing frameworks will also be a key part of your role (see the sketch below). Collaboration with Data Engineering and Analytics teams is essential to ensure the accuracy and relevance of data used for reporting and KPIs. Validating business rules and metrics implemented in dashboards such as Tableau, and ensuring alignment between data definitions, business expectations, and actual outputs, are crucial aspects of the job. Identifying data quality issues, reporting defects in JIRA, and seeing them through to resolution will be part of your daily tasks, as will maintaining test documentation (STP, STD, STR) for data validation processes and monitoring scheduled jobs and alerting systems for data pipeline failures.

To excel in this role, you should have at least 5 years of experience in QA or data quality roles and strong proficiency in Python and SQL. Experience testing ETL/ELT processes, preferably on Snowflake or similar DWH platforms, is necessary, as are familiarity with data pipeline tools like Airflow, dbt, and Fivetran and a strong understanding of QA methodologies (STP/STD/STR). Excellent communication skills, the ability to adapt quickly to new tools and systems, a business-minded approach to understanding and validating KPIs and reporting logic, and the capability to work independently and proactively are all necessary qualities for this role.

Preferred qualifications include experience with test automation tools like PyTest or Great Expectations, experience validating dashboards in Tableau or similar tools, familiarity with Git, JIRA, and CI/CD processes, and knowledge of healthcare data or analytics, which is considered a plus.
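A minimal sketch of automated data-quality checks of the kind described, using pytest; an in-memory SQLite database stands in for the Snowflake warehouse, and the table and checks are invented for illustration.

```python
# Minimal pytest sketch of automated data-quality checks. In practice the
# connection would point at Snowflake; SQLite keeps the example runnable.
import sqlite3

import pytest


@pytest.fixture(scope="module")
def conn():
    c = sqlite3.connect(":memory:")
    c.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    c.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 12.0)])
    yield c
    c.close()


def scalar(conn, sql):
    # Run a query that returns a single value.
    return conn.execute(sql).fetchone()[0]


def test_no_null_order_ids(conn):
    assert scalar(conn, "SELECT COUNT(*) FROM orders WHERE order_id IS NULL") == 0


def test_amounts_are_positive(conn):
    assert scalar(conn, "SELECT COUNT(*) FROM orders WHERE amount <= 0") == 0
```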

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Developer (Japanese-speaking), you will support projects in the Japan region by applying your technical expertise and Japanese language proficiency. This hands-on role focuses primarily on backend development, data engineering, and cloud technologies. Candidates with prior experience working in Japan, or those looking to relocate from Japan, are highly desired.

Your key responsibilities will include designing and developing ETL/ELT pipelines using Azure or equivalent cloud platforms, collaborating with Japanese-speaking stakeholders and internal teams, working with Azure Data Factory, Synapse, Data Lake, and Power BI for data integration and reporting, and participating in technical discussions, requirement gathering, and solution design. You will also be expected to ensure timely delivery of project milestones while upholding code quality and documentation standards.

To excel in this role, you should possess 3-5 years of experience in data engineering or backend development; proficiency in SQL, Python, and ETL/ELT processes; hands-on experience with Azure Data Factory, Synapse, Data Lake, and Power BI; a strong understanding of cloud architecture (preferably Azure, though AWS/GCP are acceptable); and at least JLPT N3 certification (N2 or N1 preferred) to communicate effectively in Japanese. A Bachelor's degree in Computer Science, Engineering, or a related field is also required.

Preferred candidates include individuals who have worked in Japan for at least 2 years and are now relocating to India, or those currently based in Japan and planning to relocate within a month. Bangalore is the preferred location, though Kochi is also acceptable for relocation. Candidates from other regions, such as Noida or Gurgaon, will not be considered unless relocation to the specified locations is confirmed.

The interview process will consist of a technical evaluation conducted by Suresh Varghese in the first round, followed by a Japanese language proficiency assessment. At least one interview round must be conducted face-to-face for shortlisted candidates to assess their suitability for the role.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

DXFactor is a US-based tech company working with customers globally. We are a certified Great Place to Work and are currently seeking candidates for the role of Data Engineer with 4 to 6 years of experience. Our presence spans the US and India, specifically Ahmedabad. As a Data Engineer at DXFactor, you will specialize in Snowflake, AWS, and Python.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines for both batch and streaming workflows (see the sketch below).
- Implement robust ETL/ELT processes to extract data from diverse sources and load it into data warehouses.
- Build and optimize database schemas following best practices in normalization and indexing.
- Create and update documentation for data flows, pipelines, and processes.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Monitor and troubleshoot data pipelines to ensure optimal performance.
- Implement data quality checks and validation processes.
- Develop and manage CI/CD workflows for data engineering projects.
- Stay updated with emerging technologies and suggest enhancements to existing systems.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 4+ years of experience in data engineering roles.
- Proficiency in Python programming and SQL query writing.
- Hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Familiarity with data warehousing technologies such as Snowflake, Redshift, and BigQuery.
- Demonstrated ability to construct efficient and scalable data pipelines.
- Practical knowledge of batch and streaming data processing methods.
- Experience implementing data validation, quality checks, and error handling mechanisms.
- Work experience with cloud platforms, particularly AWS (S3, EMR, Glue, Lambda, Redshift) and/or Azure (Data Factory, Databricks, HDInsight).
- Understanding of various data architectures, including data lakes, data warehouses, and data mesh.
- Proven ability to debug complex data flows and optimize underperforming pipelines.
- Strong documentation skills and effective communication of technical concepts.
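A hedged sketch of a simple batch step in the AWS style this role describes: read a CSV object from S3 with boto3, apply a validation rule, and write the clean rows back. Bucket names, keys, and columns are placeholders, not details from the listing.

```python
# Minimal batch step: read a CSV from S3, validate rows, write clean rows back.
# Bucket names, keys, and columns are placeholders; AWS credentials are assumed
# to be configured in the environment.
import csv
import io

import boto3


def is_positive_number(value):
    try:
        return float(value) > 0
    except (TypeError, ValueError):
        return False


s3 = boto3.client("s3")

obj = s3.get_object(Bucket="raw-bucket", Key="orders/2024-01-01.csv")
text = obj["Body"].read().decode("utf-8")
rows = list(csv.DictReader(io.StringIO(text)))

# Validation rule: keep only rows with a positive numeric amount.
valid = [r for r in rows if is_positive_number(r.get("amount"))]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=rows[0].keys() if rows else [])
writer.writeheader()
writer.writerows(valid)

s3.put_object(
    Bucket="curated-bucket",
    Key="orders/2024-01-01.csv",
    Body=out.getvalue().encode("utf-8"),
)
```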

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You should have 4-6 years of development/design experience, with a minimum of 3 years in Big Data technologies, both on-premises and in the cloud. You should be proficient in Snowflake and possess strong SQL programming skills. The role requires strong experience with data modeling and schema design, as well as extensive experience with data warehousing tools such as Snowflake, BigQuery, or Redshift and BI tools such as Tableau, QuickSight, or Power BI (hands-on experience with at least one of each is a must). You must also have experience with the orchestration tool Airflow and the transformation tool dbt.

Your responsibilities will include implementing ETL/ELT processes and building data pipelines, along with workflow management, job scheduling, and monitoring (see the Airflow sketch below). You should have a good understanding of Data Governance, Security and Compliance, Data Quality, Metadata Management, Master Data Management, and Data Catalogs, as well as cloud services (AWS), including IAM and log analytics. Excellent interpersonal and teamwork skills are essential, along with experience leading and mentoring other team members. Good knowledge of Agile Scrum and strong communication skills are also required.

At GlobalLogic, the culture prioritizes caring and inclusivity. You'll join an environment where people come first, fostering meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Continuous learning and development opportunities are provided to help you grow personally and professionally.

Meaningful work awaits you at GlobalLogic, where you'll have the chance to work on impactful projects and engage your curiosity and problem-solving skills. The organization values balance and flexibility, offering various career areas, roles, and work arrangements to help you achieve a healthy balance between work and life. GlobalLogic is a high-trust organization where integrity is key, ensuring a safe, reliable, and ethical global environment for all employees. Truthfulness, candor, and integrity are fundamental values upheld in everything GlobalLogic does.

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Leading the digital revolution since 2000, GlobalLogic helps create innovative digital products and experiences, transforming businesses and redefining industries through intelligent products, platforms, and services.
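To ground the orchestration requirement, here is a minimal Airflow 2.x sketch of a daily DAG that runs an ingestion script and then dbt; the commands and paths are placeholders, not an actual GlobalLogic pipeline.

```python
# Minimal Airflow 2.x sketch: a daily DAG that ingests raw data, then runs dbt.
# The `schedule` parameter requires Airflow 2.4+; commands and paths are
# placeholders invented for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw",
        bash_command="python /opt/pipelines/ingest.py",  # hypothetical script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt",   # hypothetical project dir
    )

    ingest >> transform  # run transformations only after ingestion succeeds
```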

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

Maharashtra

On-site

As the Technical Lead of Data Engineering at Assent, you will collaborate with various stakeholders, including Product Managers, Product Designers, and Engineering team members, to identify opportunities and evaluate the feasibility of solutions. Your role will involve offering technical guidance, influencing decision-making, and aligning data engineering initiatives with business objectives as part of Assent's roadmap development. You will be responsible for driving the technical strategy, overseeing team execution, and implementing process improvements to construct resilient and scalable data systems. In addition, you will lead data engineering efforts, mentor a growing team, and establish robust and scalable data infrastructure.

Key Requirements & Responsibilities:
- Lead the technical execution of data engineering projects to ensure high-quality and timely delivery, covering discovery, delivery, and adoption stages.
- Collaborate with Architecture team members to design and implement scalable, high-performance data pipelines and infrastructure.
- Provide technical guidance to the team, ensuring adherence to best practices in data engineering, performance optimization, and system reliability.
- Work cross-functionally with teams such as Product Managers, Software Development, Analysts, and AI/ML to define and implement data initiatives.
- Partner with the team manager to plan and prioritize work, striking a balance between short-term deliverables and long-term technical enhancements.
- Keep abreast of emerging technologies and methodologies, advocating for their adoption to accelerate the team's objectives.
- Ensure compliance with corporate security policies and follow Assent's established guidelines and procedures.

Qualifications (Your Knowledge, Skills and Abilities):
- 10+ years of experience in data engineering, software development, or related fields.
- Proficiency with cloud data platforms, particularly AWS.
- Expertise in modern data technologies such as Spark, Airflow, dbt, Snowflake, Redshift, or similar.
- Deep understanding of distributed systems and data pipeline design, with specialization in ETL/ELT processes, data warehousing, and real-time streaming.
- Strong programming skills in Python, SQL, Scala, or similar languages.
- Experience with infrastructure-as-code tools like Terraform or CloudFormation and knowledge of DevOps best practices.
- Ability to influence technical direction and promote best practices across teams.
- Excellent communication and leadership skills, with a focus on fostering collaboration and technical excellence.
- A learning mindset, continuously exploring new technologies and best practices.
- Experience in security, compliance, and governance related to data systems is a plus.

This is not an exhaustive list of duties, and responsibilities may be modified or added as needed to meet business requirements.

Life at Assent: At Assent, we are dedicated to cultivating an inclusive environment where team members feel valued, respected, and heard. Our diversity, equity, and inclusion practices are guided by our Diversity and Inclusion Working Group and Employee Resource Groups (ERGs), ensuring that team members from diverse backgrounds are recruited, retained, and given opportunities to contribute to business success. If you need assistance or accommodation during any stage of the interview and selection process, please reach out to talent@assent.com, and we will be happy to assist you.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Engineer at Perch Energy, you will be a key player in the design, development, and maintenance of our data infrastructure and pipelines. Your collaboration with the Data and Analytics Engineering team, as well as engineering and operations teams, will ensure the smooth flow and availability of high-quality data for analysis and reporting. Your expertise will be crucial in optimizing data workflows, ensuring data integrity, and scaling our data infrastructure to support the company's growth. You will have the opportunity to engage with cutting-edge technology, influence the development of a world-class data ecosystem, and work in a fast-paced environment as part of a small, high-impact team.

The core data stack at Perch Energy includes Snowflake and dbt Core, orchestrated in Prefect and Argo within our AWS-based ecosystem. Data from a wide range of sources is loaded using Fivetran or Segment, with custom Python used where necessary.

Your responsibilities will include designing, developing, and maintaining scalable and efficient data pipelines in an AWS environment, focusing on the Snowflake instance and utilizing tools such as Fivetran, Prefect, Argo, and dbt (see the Prefect sketch below). Collaboration with business analysts, analytics engineers, and software engineers to understand data requirements and deliver reliable solutions will be essential. Additionally, you will design, build, and maintain tooling that facilitates interaction with the data platform, including CI/CD pipelines, testing frameworks, and command-line tools.

To succeed in this role, you should have at least 3 years of experience as a Data Engineer, data-adjacent Software Engineer, or member of a small data team, with a strong focus on building and maintaining data pipelines. Proficiency in Python, SQL, and database management and design is required, along with familiarity with data integration patterns, ETL/ELT processes, and data warehousing concepts. Experience with data orchestration tools like Argo, Prefect, or Airflow is a must, along with excellent problem-solving skills and attention to detail.

While not mandatory, an undergraduate or graduate degree in a technical field; experience with AWS, Snowflake, Fivetran, Argo, Prefect, dbt, and GitHub Actions; and familiarity with DevOps practices would be advantageous. Previous experience managing enterprise-level data pipelines, working with large datasets, or knowledge of the energy sector would also be beneficial.

Perch Energy offers competitive compensation, a remote-first policy, a flexible leave policy, medical insurance, an annual performance cycle, team engagement activities, L&D programs, and a supportive engineering culture that values diversity, empathy, teamwork, trust, and efficiency. Perch Energy is committed to providing reasonable accommodations for individuals with disabilities throughout the job application and interview process and employment tenure.
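A minimal Prefect 2 sketch of the flow-and-task orchestration mentioned above; the tasks are stand-ins (a real flow here would trigger Fivetran syncs and dbt runs against Snowflake), and all names are invented for illustration.

```python
# Minimal Prefect 2 sketch: tasks for extract, transform, and load composed
# into a flow. The data and task bodies are stand-ins for real pipeline steps.
from prefect import flow, task


@task(retries=2)
def extract() -> list[dict]:
    # Stand-in for pulling records from a source system.
    return [{"id": 1, "kwh": 12.5}, {"id": 2, "kwh": None}]


@task
def transform(rows: list[dict]) -> list[dict]:
    # Drop rows with missing meter readings.
    return [r for r in rows if r["kwh"] is not None]


@task
def load(rows: list[dict]) -> None:
    print(f"loading {len(rows)} rows")  # stand-in for a warehouse write


@flow(name="energy-elt")
def energy_elt():
    load(transform(extract()))


if __name__ == "__main__":
    energy_elt()
```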

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

You should possess a Bachelor's degree in Computer Science, Engineering, or a related field, along with at least 8 years of work experience in data-first systems, including a minimum of 4 years on Data Lake/Data Platform projects on AWS or Azure. Extensive knowledge and hands-on experience with data warehousing tools such as Snowflake, BigQuery, or Redshift is crucial, and proficiency in SQL for managing and querying data is a must-have skill for this role.

You are expected to have experience with relational databases such as Azure SQL and AWS RDS, as well as an understanding of NoSQL databases like MongoDB for handling various data formats and structures. Familiarity with orchestration tools like Airflow and dbt would be advantageous, as would experience building stream-processing systems using solutions such as Kafka or Azure Event Hub (see the streaming sketch below).

Your responsibilities will include designing and implementing ETL/ELT processes using tools like Azure Data Factory to ingest and transform data into the data lake. You should also have expertise in data migration and processing with AWS (S3, Glue, Lambda, Athena, RDS Aurora) or Azure (ADF, ADLS, Azure Synapse, Databricks). Data cleansing and enrichment skills are crucial to ensure data quality for downstream processing and analytics.

Furthermore, you must be capable of managing schema evolution and metadata for the data lake, with experience in tools like Azure Purview for data discovery and cataloging. Proficiency in creating and managing APIs for data access, preferably with JDBC/ODBC experience, is required, as is knowledge of data governance practices, data privacy laws such as GDPR, and security measures for the data lake. Strong programming skills in languages like Python, Scala, or SQL are necessary for data engineering tasks, together with experience with automation and orchestration tools, familiarity with CI/CD practices, and the ability to optimize data storage and retrieval for analytical queries.

Collaboration with the Principal Data Architect and other team members to align data solutions with architectural and business goals is crucial. As a lead, you will be responsible for critical system design changes and software projects, and for ensuring timely project deliverables. You will work with stakeholders to translate business needs into efficient data infrastructure systems, review design proposals, conduct code review sessions, and promote best practices. Experience working in an Agile model, delivering quality deliverables on time, and translating complex requirements into technical solutions is also expected.
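As a hedged illustration of the stream-processing requirement, here is a minimal Spark Structured Streaming sketch that reads from Kafka (Azure Event Hub also exposes a Kafka-compatible endpoint) and lands parsed records in the lake; the broker, topic, and paths are placeholders, and the job needs the spark-sql-kafka connector package on the classpath.

```python
# Minimal Spark Structured Streaming sketch: consume a Kafka topic, parse one
# JSON field, and write the result to the lake. Requires the spark-sql-kafka
# connector package; broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
         .option("subscribe", "events")                     # placeholder topic
         .load()
)

# Kafka delivers raw bytes; cast the value to string and extract one JSON field.
parsed = events.select(
    F.get_json_object(F.col("value").cast("string"), "$.amount")
     .cast("double")
     .alias("amount")
)

query = (
    parsed.writeStream.format("parquet")
          .option("path", "/mnt/datalake/curated/events")        # placeholder path
          .option("checkpointLocation", "/mnt/datalake/_chk/events")
          .start()
)
query.awaitTermination()
```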

Posted 1 month ago

Apply

8.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

Job Description: As a Data Scientist at Hitachi Solutions India Pvt Ltd in Pune, India, you will be a valuable member of our dynamic team. Your primary responsibility will be to extract valuable insights from complex datasets, develop advanced analytical models, and drive data-driven decision-making across the organization. With 8-14 years of experience, your primary skill should be Data Science, with secondary skills in Data Engineering/Data Analytics.

You will work on cutting-edge AI applications with a focus on Natural Language Processing (NLP) and Time Series Forecasting, along with a working knowledge of Computer Vision (CV) techniques, collaborating with a diverse team of engineers, analysts, and domain experts to build holistic, multi-modal solutions.

Expertise in Python and libraries such as Pandas, NumPy, Scikit-learn, Hugging Face Transformers, and Prophet/ARIMA is essential. You should also have a strong understanding of the model development lifecycle, from data ingestion to deployment, and hands-on experience with SQL and data visualization tools like Seaborn, Matplotlib, and Tableau. Experience handling retail-specific data, familiarity with cloud platforms like AWS, GCP, or Azure, and exposure to API development (FastAPI, Flask) for ML model deployment will be beneficial. Knowledge of MLOps practices, previous experience fine-tuning language models, and expertise in Data Engineering using Azure technologies are desirable skills for this role.

Key responsibilities will include applying NLP techniques to extract insights from text data, analyzing historical demand data for time series forecasting (see the forecasting sketch below), and potentially contributing to Computer Vision projects. Collaboration with cross-functional teams and developing scalable ML components for production environments will be crucial aspects of your role.

Qualifications required include a Master's degree in Computer Science, Data Science, Statistics, or a related field; proven experience in data science or machine learning; strong proficiency in Python and SQL; and familiarity with cloud technologies such as Azure Databricks and MLflow. Excellent problem-solving skills, strong communication abilities, and the capability to work independently and collaboratively in a fast-paced environment are essential for success in this role.

Please be cautious of potential scams during the recruitment process; all official communication regarding your application and interview requests will come from our @hitachisolutions.com domain email address.
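A minimal Prophet sketch of the time-series forecasting work listed among the responsibilities; the demand series below is synthetic, purely for illustration.

```python
# Minimal time-series forecasting sketch with Prophet, one of the libraries
# the role lists. The history here is a synthetic upward-trending series.
import pandas as pd
from prophet import Prophet

# Prophet expects columns named "ds" (date) and "y" (value).
history = pd.DataFrame({
    "ds": pd.date_range("2023-01-01", periods=365, freq="D"),
    "y": (pd.Series(range(365)) * 0.1 + 50).tolist(),
})

model = Prophet(weekly_seasonality=True)
model.fit(history)

future = model.make_future_dataframe(periods=30)  # forecast 30 days ahead
forecast = model.predict(future)

# yhat is the point forecast; yhat_lower/upper bound the uncertainty interval.
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```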

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

Maharashtra

On-site

As the Technical Lead for Data Engineering at Assent, you will collaborate with other team members to identify opportunities and assess the feasibility of solutions. Your role will involve providing technical guidance, influencing decision-making, and aligning data engineering initiatives with business goals. You will drive the technical strategy, team execution, and process improvements to build resilient and scalable data systems. Mentoring a growing team and building robust, scalable data infrastructure will be essential aspects of your responsibilities.

Your key requirements and responsibilities will include driving the technical execution of data engineering projects, working closely with Architecture members to design and implement scalable data pipelines, and providing technical guidance to ensure best practices in data engineering. You will collaborate cross-functionally with various teams to define and execute data initiatives, plan and prioritize work with the team manager, and stay updated with emerging technologies to drive their adoption.

To be successful in this role, you should have 10+ years of experience in data engineering or related fields, expertise in cloud data platforms such as AWS, proficiency in modern data technologies like Spark, Airflow, and Snowflake, and a deep understanding of distributed systems and data pipeline design. Strong programming skills in languages like Python, SQL, or Scala, experience with infrastructure as code and DevOps best practices, and the ability to influence technical direction and advocate for best practices are also necessary. Strong communication and leadership skills, a learning mindset, and experience in security, compliance, and governance related to data systems will be advantageous.

At Assent, we value your talent, energy, and passion, and offer various benefits to support your well-being, financial health, and personal growth. Our commitment to diversity, equity, and inclusion ensures that all team members are included, valued, and provided with equal opportunities for success. If you require any assistance or accommodation during the interview process, please feel free to contact us at talent@assent.com.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Technical Solution Analyst, you will be instrumental in creating, enhancing, and automating solutions to streamline operational procedures. Your responsibilities will include configuring TSA & TSE solutions, managing product setups, and ensuring data accuracy for reporting and analytics. By leveraging your analytical skills and technical proficiency, you will identify and resolve technical challenges, align solutions with business goals, and implement automation tools to boost efficiency.

You will collaborate with stakeholders to understand requirements, develop SQL queries, utilize BI tools such as Tableau, Power BI, and Looker, and work on ETL/ELT pipelines for data transformation. Your role will also involve integrating APIs, scripting for automation, and documenting technical solutions for debugging and future reference.

To excel in this role, you should possess 2-5 years of relevant experience in data management, automation, and solution development. Strong expertise in data analysis, SQL, BI tools, and programming languages like Python, R, or JavaScript is essential. Additionally, hands-on experience with API integration, problem-solving abilities, and effective communication skills are crucial for success in this position. Previous experience in UI development would be advantageous.

At ThoughtSpot, we value diversity, inclusion, and continuous learning. We believe that a diverse team with varied perspectives and experiences leads to innovative solutions. We encourage individuals from all backgrounds to apply, regardless of whether they meet 100% of the criteria listed. If you are passionate about working in a dynamic environment with talented individuals and contributing to groundbreaking products, we invite you to explore our mission and consider joining our team.

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an Azure Data Engineering Director, you will play a pivotal role in leading the data strategy and operations for our EDP Cloud Fabric. Your expertise will be essential in establishing resilience through a multi-cloud model and enabling key capabilities such as Power BI and OpenAI from Microsoft. Collaborating with leads across GTIS, CSO, and CTO, you will accelerate the introduction and adoption of new designs on Azure.

Your key responsibilities will include defining and executing a comprehensive data strategy aligned with business objectives, leveraging Azure services for innovation in data processing, analytics, and insights delivery. You will architect and manage large-scale data platforms using Azure tools like Azure Data Factory, Azure Synapse Analytics, Databricks, and Cosmos DB, optimizing data engineering pipelines for performance, scalability, and cost-efficiency.

Furthermore, you will establish robust data governance frameworks to ensure compliance with industry regulations, oversee data quality, security, and consistency across all platforms, and build, mentor, and retain a high-performing data engineering team. Collaboration with cross-functional stakeholders to bridge technical and business objectives will be a key aspect of your role. You will also ensure data readiness for AI/ML initiatives, drive the adoption of real-time insights through event-driven architectures, streamline ETL/ELT processes for faster data processing and reduced downtime, and identify and implement cutting-edge Azure technologies to create new revenue streams through data-driven innovation.

In this role, you will be accountable for building and maintaining data architectures and pipelines, designing and implementing data warehouses and data lakes, developing processing and analysis algorithms, and collaborating with data scientists to build and deploy machine learning models. You will manage a business function, provide input to strategic initiatives, and lead a large team or sub-function, embedding a performance culture aligned with the organization's values. Additionally, you will provide expert advice to senior management, manage resourcing and budgeting, and foster compliance within the function.

As a Senior Leader, you are expected to demonstrate a clear set of leadership behaviors: listening and authenticity, energizing and inspiring others, aligning across the enterprise, and developing colleagues. Upholding the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, along with the Barclays Mindset of Empower, Challenge, and Drive, will be essential in creating an environment for colleagues to thrive and deliver to an excellent standard.

Posted 1 month ago

Apply

1.0 - 5.0 years

0 Lacs

Vadodara, Gujarat

On-site

Job Title: Data Architect
Experience: 3 to 4 years
Location: Vadodara, Gujarat
Contact: 9845135287

Job Summary: We are seeking a highly skilled and experienced Data Architect to join our team. As a Data Architect, you will play a crucial role in assessing the current state of our data landscape and working closely with the Head of Data to develop a comprehensive data strategy that aligns with our organisational goals. Your primary responsibility will be to understand and map our current data environments and then help develop a detailed roadmap that will deliver a data estate enabling our business to meet its core objectives.

Main Duties & Responsibilities: The role's core duties include, but are not limited to:
- Assess the current state of our data infrastructure, including data sources, storage systems, and data processing pipelines.
- Collaborate with the Data Ops Director to define and refine the data strategy, taking into account business requirements, scalability, and performance.
- Design and develop a cloud-based data architecture, leveraging Azure technologies such as Azure Data Lake Storage, Azure Synapse Analytics, and Azure Data Factory.
- Define data integration and ingestion strategies to ensure smooth and efficient data flow from various sources into the data lake and warehouse.
- Develop data modelling and schema designs to support efficient data storage, retrieval, and analysis.
- Implement data governance processes and policies to ensure data quality, security, and compliance.
- Collaborate with cross-functional teams, including data engineers, data scientists, and business stakeholders, to understand data requirements and provide architectural guidance.
- Conduct performance tuning and optimization of the data infrastructure to meet business and analytical needs.
- Stay updated with the latest trends and advancements in data management, cloud technologies, and industry best practices.
- Provide technical leadership and mentorship to junior team members.

Key Skills:
- Proven work experience as a Data Architect or in a similar role, with a focus on designing and implementing cloud-based data solutions using Azure technology.
- Strong knowledge of data architecture principles, data modelling techniques, and database design concepts.
- Experience with cloud platforms, particularly Azure, and a solid understanding of their data-related services and tools.
- Proficiency in SQL and one or more programming languages commonly used for data processing and analysis (e.g., Python, R, Scala).
- Familiarity with data integration techniques, ETL/ELT processes, and data pipeline frameworks.
- Knowledge of data governance, data security, and compliance practices.
- Strong analytical and problem-solving skills, with the ability to translate business requirements into scalable and efficient data solutions.
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders.
- Ability to adapt to a fast-paced and dynamic work environment and manage multiple priorities simultaneously.

Working Relationships: Liaison with stakeholders at all levels of the organisation.

Communication:
- Communicate with leadership and colleagues in relation to all business activities.
- Highly articulate and able to explain complex concepts in bite-size chunks.
- Strong ability to provide clear written reporting and analysis.

Personal Qualities:
- Ability to work to deadlines, with good time management skills.
- Commercially mindful and able to deliver solutions that maximise value.
- Strong analytical skills; accurate, with excellent attention to detail.
- Personal strength and resilience; adaptable and embraces change.
- Reliable, conscientious, and hardworking; approachable and professional.
- Willingness to learn, while recognising the limits of one's ability and when to seek advice.

Knowledge / Key Skills:
- Essential: Experience of Azure development and design principles; enterprise-level data warehousing design and implementation; architecture principles; proficiency in SQL development; familiarity with data integration techniques, ETL/ELT processes, and data pipeline frameworks; knowledge of data governance, data security, and compliance practices; strong experience mapping an existing data landscape and developing a roadmap to deliver business requirements; excellent communication and collaboration skills for working with cross-functional teams and stakeholders; ability to adapt to a fast-paced, dynamic environment and manage multiple priorities simultaneously.
- Desirable: Knowledge of Enterprise Architecture frameworks (e.g., TOGAF); programming languages such as R, Python, Scala, etc.

Job Type: Full-time
Experience: total work: 1 year (Preferred)
Work Location: In person

Posted 1 month ago

Apply
Page 2 of 2

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
