
947 OLAP Jobs - Page 4

Set up a Job Alert

JobPe aggregates results for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Pune, Maharashtra

Remote

R022093 - Pune, Maharashtra, India - IT Operations - Regular

Location: India, Remote. This is a remote position, so you'll be working remotely from your home. You may occasionally visit a GoDaddy office to meet with your team for events or meetings.

Join Our Team...

Demonstrate your passion for helping small businesses achieve their dreams online. By helping to move strategy into action, you will be improving GoDaddy's outreach to those small business owners whose dreams are the backbone of our company. Take part within a multichannel environment, turning strategic plans into digital marketing campaigns and ultimately influencing our customers' success! The Marketing Data Analyst will bring to bear their experience and knowledge of marketing data to deliver timely and relevant omni-channel marketing experiences to our customers worldwide. You will apply your understanding of marketing data and a robust marketing technology platform to drive campaign automation and optimization, ensuring continuous improvement in scaling operations for our customer marketing programs, including Email, SMS, WhatsApp, and new and emerging channels.

What you'll get to do...

- Serve as the Marketing Data subject matter expert for the Customer Marketing team, with extensive knowledge of data, including standard methodologies and anti-patterns
- Play an active role in driving requirements for the implementation and integration of an evolving, exceptional marketing automation platform
- Craft and develop customer segments to be applied across Email, Web, CRM, SMS, WhatsApp, and many other customer touch points
- Collaborate with cross-functional teams in the creation of segmentation- and personalisation-based strategies
- Perform ongoing analysis of marketing programs and broader business performance to surface key insights and recommendations that help inform our marketing strategy
- Ensure the accuracy of our outbound marketing campaigns by driving QA and ongoing monitoring at all levels, all the way up to source data

Your experience should include...

- 4+ years of experience in marketing data management, specialising in data set development for marketing automation and email marketing
- A minimum of 4 years of experience working with SQL syntax, relational and non-relational database models, OLAP, and data-driven marketing platforms, with proven experience writing and understanding complex queries
- Expertise in testing/optimization methodologies and performance tuning, for your own work and in reviews, with strong analytical and data presentation abilities
- Experience collaborating with the MarTech Platform Team, Data Platform, and Marketing Managers to present findings and quickly diagnose and troubleshoot emergent issues
- Experience with segmentation tools like Message Gears, SQL Server, and AWS database systems such as Redshift and Athena is highly preferred
- Experience with data visualisation tools like Tableau and/or QuickSight is preferred

You might also have...

- A four-year bachelor's degree (required); a master's degree is preferred
- Hands-on skills in Python and experience with an enterprise-level marketing automation platform such as Salesforce Marketing Cloud (preferred)
- Experience working with B2B and B2C data, including lead and prospect management (nice to have)

We've got your back...

We offer a range of total rewards that may include paid time off, retirement savings (e.g., 401k, pension schemes), bonus/incentive eligibility, equity grants, participation in our employee stock purchase plan, competitive health benefits, and other family-friendly benefits including parental leave. GoDaddy's benefits vary based on individual role and location and can be reviewed in more detail during the interview process. We also embrace our diverse culture and offer a range of Employee Resource Groups (Culture). Have a side hustle? No problem. We love entrepreneurs! Most importantly, come as you are and make your own way.

About us...

GoDaddy is empowering everyday entrepreneurs around the world by providing the help and tools to succeed online, making opportunity more inclusive for all. GoDaddy is the place people come to name their idea, build a professional website, attract customers, sell their products and services, and manage their work. Our mission is to give our customers the tools, insights, and people to transform their ideas and personal initiative into success. To learn more about the company, visit About Us.

At GoDaddy, we know diverse teams build better products—period. Our people and culture reflect and celebrate that sense of diversity and inclusion in ideas, experiences and perspectives. But we also know that's not enough to build true equity and belonging in our communities. That's why we prioritize integrating diversity, equity, inclusion and belonging principles into the core of how we work every day—focusing not only on our employee experience, but also our customer experience and operations. It's the best way to serve our mission of empowering entrepreneurs everywhere, and making opportunity more inclusive for all. To read more about these commitments, as well as our representation and pay equity data, check out our Diversity and Pay Parity annual report, which can be found on our Diversity Careers page.

GoDaddy is proud to be an equal opportunity employer. GoDaddy will consider for employment qualified applicants with criminal histories in a manner consistent with local and federal requirements. Refer to our full EEO policy. Our recruiting team is available to assist you in completing your application. If they could be helpful, please reach out to myrecruiter@godaddy.com. GoDaddy doesn't accept unsolicited resumes from recruiters or employment agencies.
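This posting centers on SQL-driven audience segmentation over warehouse data. Below is a minimal, hypothetical sketch of that kind of query run against Amazon Redshift from Python; the cluster endpoint, credentials, and all table and column names are invented for illustration.

```python
# Hypothetical segmentation query: customers who purchased in the last 90
# days but opened no campaign email in the last 30 days. Schema is invented.
import redshift_connector  # Amazon's Python driver for Redshift

SEGMENT_SQL = """
SELECT c.customer_id, c.email
FROM customers c
JOIN orders o
  ON o.customer_id = c.customer_id
 AND o.order_ts >= DATEADD(day, -90, GETDATE())
LEFT JOIN email_events e
  ON e.customer_id = c.customer_id
 AND e.event_type = 'open'
 AND e.event_ts >= DATEADD(day, -30, GETDATE())
GROUP BY c.customer_id, c.email
HAVING COUNT(e.customer_id) = 0;
"""

conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",  # placeholder
    database="marketing",
    user="analyst",
    password="***",
)
cur = conn.cursor()
cur.execute(SEGMENT_SQL)
segment = cur.fetchall()  # rows would feed an email/SMS campaign tool
print(f"{len(segment)} customers in the re-engagement segment")
```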

Posted 1 week ago

Apply

2.0 years

0 Lacs

Patna, Bihar, India

On-site

Job Description

This is an exciting opportunity for an experienced industry professional with strong analytical and technical skills to join and add value to a dedicated and friendly team. We are looking for a Data Analyst who is driven by data-driven decision-making and insights. As a core member of the Analytics Team, the candidate will take ownership of data analysis projects by working independently with little supervision.

The ideal candidate is a highly resourceful and innovative professional with extensive experience in data analysis, statistical modeling, and data visualization. The candidate must have a strong command of data analysis tools like SAS/SPSS, Power BI/Tableau, or R, along with expertise in MS Excel and MS PowerPoint. The role requires optimizing data collection procedures, generating reports, and applying statistical techniques for hypothesis testing and data interpretation.

Key Responsibilities

- Perform data analysis using tools like SAS, SPSS, Power BI, Tableau, or R.
- Optimize data collection procedures and generate reports on a weekly, monthly, and quarterly basis.
- Utilize statistical techniques for hypothesis testing to validate data and interpretations.
- Apply data mining techniques and OLAP methodologies for in-depth insights.
- Develop dashboards and data visualizations to present findings effectively.
- Collaborate with cross-functional teams to define, design, and execute data-driven strategies.
- Ensure the accuracy and integrity of data used for analysis and reporting.
- Utilize advanced Excel skills to manipulate and analyze large datasets.
- Prepare technical documentation and presentations for stakeholders.

Candidate Profile

Qualifications:
- MCA, or Graduate/Post Graduate in Statistics, or BE/B.Tech in Computer Science & Engineering, Information Technology, or Electronics.
- A minimum of 2 years' experience in data analysis using SAS/SPSS, Power BI/Tableau, or R.
- Proficiency in MS Office, with expertise in MS Excel and MS PowerPoint.
- Strong analytical skills with attention to detail.
- Experience in data mining and OLAP methodologies.
- Ability to generate insights and reports based on data trends.
- Excellent communication and presentation skills.

Desired Qualifications:
- Experience in predictive analytics and machine learning techniques.
- Knowledge of SQL and database management.
- Familiarity with Python for data analysis.
- Experience in automating reporting processes.

(ref:hirist.tech)
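Since the posting calls out statistical hypothesis testing, here is a small illustrative sketch in Python using SciPy; the data is synthetic and the 0.05 threshold is simply the conventional default.

```python
# Toy hypothesis test: did a new campaign change average order value?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=52.0, scale=8.0, size=200)   # pre-campaign orders
variant = rng.normal(loc=54.5, scale=8.0, size=200)   # post-campaign orders

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the difference in means is statistically significant.")
else:
    print("Fail to reject H0: no significant difference detected.")
```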

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You have a minimum of 5 years of experience in the MSBI product suite, particularly in Power BI and DAX. Your role involves data preparation for BI projects, understanding business requirements in a BI context, transforming raw data into meaningful insights using Power BI, and working with SSIS. You are skilled in requirement analysis, design, prototyping, and building enterprise models using Power BI Desktop.

Your responsibilities also include developing data models, OLAP cubes, and reports while applying best practices to the development lifecycle. You document source-to-target mappings, data dictionaries, and database designs, and identify areas for optimization in data flows. You have a good understanding of DAX queries in Power BI Desktop and can create Power BI dashboards, reports, and KPI scorecards, and transform manual reports. Additionally, you have experience in visualization, transformation, data analysis, and formatting.

Your expertise extends to connecting to data sources, importing and transforming data for business intelligence, and publishing and scheduling Power BI reports. You are also involved in the installation and administration of Microsoft SQL Server. Knowledge of EBS modules like Finance, HCM, and Procurement is considered an advantage.

You excel in a fast-paced, dynamic, client-facing role, delivering high-quality work products that exceed expectations. Your leadership, interpersonal, prioritization, multi-tasking, problem-solving, and communication skills are exceptional. Your ability to thrive in a team-oriented environment, manage ambiguity, adapt to new technologies, and solve undefined problems makes you a valuable asset.
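The role mentions publishing and scheduling Power BI reports; one way this is commonly automated is through the Power BI REST API. The sketch below assumes a valid Azure AD access token (e.g., obtained via MSAL) and placeholder workspace and dataset IDs.

```python
# Hedged sketch: queue a Power BI dataset refresh via the REST API.
import requests

ACCESS_TOKEN = "<AAD access token>"   # placeholder; obtain via msal in practice
GROUP_ID = "<workspace-id>"           # hypothetical workspace ID
DATASET_ID = "<dataset-id>"           # hypothetical dataset ID

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
       f"/datasets/{DATASET_ID}/refreshes")
resp = requests.post(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()   # 202 Accepted means the refresh was queued
print("Refresh queued:", resp.status_code)
```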

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Who We Are

Addepar is a global technology and data company that helps investment professionals provide the most informed, precise guidance for their clients. Hundreds of thousands of users have entrusted Addepar to empower smarter investment decisions and better advice over the last decade. With client presence in more than 50 countries, Addepar's platform aggregates portfolio, market and client data for over $7 trillion in assets. Addepar's open platform integrates with more than 100 software, data and services partners to deliver a complete solution for a wide range of firms and use cases. Addepar embraces a global flexible workforce model with offices in New York City, Salt Lake City, Chicago, London, Edinburgh, Pune, and Dubai.

The Role

We are currently seeking an experienced Backend Software Engineer with a strong Java background to join Addepar on our Partner Platform team! We are building out a new platform from scratch which will enable third parties to simply and safely engage with Addepar at scale. This team is passionate about handling large volumes of data and the engineering challenges in building the distributed systems responsible for automated data ingestion and transformation. We want people who are hard-working and care deeply about solving hard problems at high scale, delighting customers, and participating in the success of the whole company. We look for dedicated engineers with real technical depth and a desire to understand the end business. If you've designed sophisticated scalable systems, have extensive experience with Java and related technologies, or are just interested in tackling complicated, critically important technical problems, join us!

What You'll Do

- Work in partnership with engineering partners and other platform users to identify requirements and priorities, and map out solutions for challenging technology and workflow problems.
- Design, develop, and deploy high-quality Java applications that integrate with various data sources and services.
- Build technical skills in a high-performing team of engineers in India who can design, develop, and deploy Java-based solutions with a focus on backend services and APIs, and help other teams at Addepar build on top of the Addepar platform.
- Lay a solid foundation of the software architecture for the team in system design and code development, with a strong focus on Java and related technologies.

Who You Are

- B.S. or M.S. in Computer Science or a similar technical field of study (or equivalent practical experience).
- 4+ years of software engineering experience.
- Expert-level proficiency in backend development, with a focus on Java.
- Good experience on AWS or any other cloud platform.
- Experience with databases, SQL, NoSQL, OLAP, and/or data lake architectures.
- A strong ownership mentality and drive to solve the most important problems.
- Passion for implementing standard processes with a bias toward smart automation.
- A rapid learner with robust analytical and problem-solving abilities.
- Comfortable working in a cloud context, with automated infrastructure and service-oriented architecture.
- Experience with Java, Spring Boot, RESTful APIs, and related technologies is preferred.
- Practical knowledge of agile practices, with an outlook that prioritizes experimentation and iteration combined with an ability to guide teams toward activities and processes that facilitate optimal outcomes.

Our Values

- Act Like an Owner - Think and operate with intention, purpose and care. Own outcomes.
- Build Together - Collaborate to unlock the best solutions. Deliver lasting value.
- Champion Our Clients - Exceed client expectations. Our clients' success is our success.
- Drive Innovation - Be bold and unconstrained in problem solving. Transform the industry.
- Embrace Learning - Engage our community to broaden our perspective. Bring a growth mindset.

In addition to our core values, Addepar is proud to be an equal opportunity employer. We seek to bring together diverse ideas, experiences, skill sets, perspectives, backgrounds and identities to drive innovative solutions. We commit to promoting a welcoming environment where inclusion and belonging are held as a shared responsibility. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

PHISHING SCAM WARNING: Addepar is among several companies recently made aware of a phishing scam involving con artists posing as hiring managers recruiting via email, text and social media. The imposters are creating misleading email accounts, conducting remote "interviews," and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from Addepar without a formal interview process. Additionally, Addepar will not ask you to purchase equipment or supplies as part of your onboarding process. If you have any questions, please reach out to TAinfo@addepar.com.

Posted 1 week ago

Apply

10.0 - 15.0 years

13 - 17 Lacs

Bengaluru

Work from Office

About Netskope

Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About the role

As a Sr. Staff Engineer on the Data Engineering Team, you'll be working on some of the hardest problems in the field of data, cloud and security, with a mission to achieve the highest standards of customer success. You will build blocks of technology that will define Netskope's future. You will leverage open-source technologies around OLAP, OLTP, streaming, big data and ML models. You will help design and build an end-to-end system to manage the data and infrastructure used to improve security insights for our global customer base. You will be part of a growing team of renowned industry experts in the exciting space of data and cloud analytics. Your contributions will have a major impact on our global customer base and across the industry through our market-leading products, and you will solve complex, interesting challenges while improving the depth and breadth of your technical and business skills.

What you will be doing

- Conceiving and building services used by Netskope products to validate, transform, load and perform analytics on large amounts of data, using distributed systems with cloud scale and reliability.
- Helping other teams architect their applications using services from the Data team, following best practices and sound designs.
- Evaluating many open-source technologies to find the best fit for our needs, and contributing to some of them.
- Working with the Application Development and Product Management teams to scale their underlying services.
- Providing easy-to-use analytics of usage patterns, anticipating capacity issues and helping with long-term planning.
- Learning about and designing large-scale, reliable enterprise services.
- Working with great people in a fun, collaborative environment.
- Creating scalable data mining and data analytics frameworks using cutting-edge tools and techniques.

Required skills and experience

- 10+ years of industry experience building highly scalable distributed data systems
- Programming experience in Scala, Python, Java or Golang
- Excellent data structure and algorithm skills
- Proven good development practices, like automated testing and measuring code coverage
- Proven experience developing complex data platforms and solutions using technologies like Spark, Kafka, Kubernetes, Iceberg, Trino, BigQuery and other open-source databases
- Experience designing and implementing large, fault-tolerant and distributed systems around columnar data stores
- Excellent written and verbal communication skills
- Bonus points for contributions to the open-source community

Education: B.Tech or equivalent required, Master's or equivalent strongly preferred

#LI-SK3
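As a rough illustration of the Spark-plus-Kafka work this role describes, here is a minimal PySpark Structured Streaming job; the broker address and topic are placeholders, and it assumes the spark-sql-kafka connector package is available on the cluster.

```python
# Minimal sketch: count security events per minute from a Kafka topic.
# Requires the org.apache.spark:spark-sql-kafka-0-10 package at submit time.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("security-events-stream").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder
          .option("subscribe", "security-events")             # placeholder topic
          .load())

# Kafka values arrive as bytes; cast, then aggregate per 1-minute window.
counts = (events
          .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```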

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Coimbatore

Work from Office

- Should have at least 3 years of experience with Power BI
- Should have at least 3 years of experience with SSAS and OLAP, with strong knowledge of MDX queries
- Should have strong knowledge of SSRS and web reporting
- Should have at least 3 years of experience with SQL Server or Oracle
- Should have strong knowledge of data marts and data warehouses
- Should have experience working with SSIS and DTS processes
- Should have excellent communication and interpersonal skills

Envision Software Engineering offers excellent pay with benefits, excellent growth opportunities, and good working conditions with a challenging job profile.
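For readers unfamiliar with MDX, here is an illustrative query of the kind this posting asks about, against a hypothetical SSAS sales cube. It is shown as a Python string, since executing it would require an XMLA/ADOMD client or SSMS; all cube, dimension, and measure names are invented.

```python
# Illustrative MDX: yearly sales and order counts for India, from a
# hypothetical cube. Run via an XMLA/ADOMD client in practice.
MDX_QUERY = """
SELECT
    { [Measures].[Sales Amount], [Measures].[Order Count] } ON COLUMNS,
    NON EMPTY
    { [Date].[Calendar Year].[Calendar Year].MEMBERS } ON ROWS
FROM [SalesCube]
WHERE ( [Geography].[Country].&[India] )
"""
print(MDX_QUERY)
```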

Posted 1 week ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Bengaluru

Work from Office

" Job Category - Information Technology Job Title - BI Developer What youll Do System integration of heterogeneous data sources and working on technologies used in the design, development, testing, deployment, and operations of DW & BI solutions Create and maintain documentation, architecture designs and data flow diagrams Help to deliver scalable solutions on the MSBI platforms and Hadoop Implement source code versioning, standard methodology tools and processes for ensuring data quality Collaborate with business professionals, application developers and technical staff working in an agile process environment Assist in activities such as Source System Analysis, creating and documenting high level business model design, UAT, project management etc. Skills What you need to succeed 3+ years of relevant work experience SSIS, SSAS, DW, Data Analysis and Business Intelligence Must have expert knowledge of Data warehousing tools SSIS, SSAS, DB Must have expert knowledge of TSQL, stored procedure, database performance tuning. Strong in Data Warehousing, Business Intelligence and Dimensional Modelling concepts with experience in Designing, developing & maintaining ETL, database & OLAP Schema and Public Objects (Attributes, Facts, Metrics etc.) Good to have experience in developing Reports and Dashboards using BI reporting tools like Tableau, SSRS, Power BI etc. Fast learner, analytical and skill to understand multiple businesses, their performance indicators Bachelors degree in Computer Science or equivalent Superb communication and presentation skills ",

Posted 1 week ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Work from Office

ECMS Req #: 533599
Number of Openings: 1
Duration of Hiring: 6 months
Years of experience: 6-8 years total, 4-5 years relevant

Detailed job description - skill set:

Required Qualifications:
- 4+ years of experience in data engineering or warehousing, with a focus on Amazon Redshift.
- Strong proficiency in SQL, with the ability to write and optimize complex queries for large datasets.
- Solid understanding of dimensional modeling, star schemas, and OLAP vs. OLTP data structures.
- Experience designing analytical data marts and transforming raw/transactional data into structured analytical formats.
- Hands-on experience with ETL tools (e.g., AWS Glue).
- Familiarity with Amazon Redshift Spectrum, RA3 nodes, and best practices for data distribution and sort keys.
- Comfortable working in cloud-native environments, particularly AWS (S3, Lambda, CloudWatch, IAM, etc.).

Preferred Qualifications:
- Exposure to data lake integration, external tables, and Redshift UNLOAD/COPY operations.
- Experience with BI tools (e.g., Tableau, QuickSight) to validate and test data integration.
- Familiarity with Python or PySpark for data transformation scripting.
- Understanding of CI/CD for data pipelines and version control using Git.
- Knowledge of data security, encryption, and compliance in a cloud environment.

Mandatory skills (only 2 or 3): Amazon Redshift, SQL
Vendor billing range (local currency, per day): INR 8,500/day
Work location: Any Infosys DC
WFO/WFH/Hybrid: Hybrid
Joining time (notice period): As early as possible
Working in shifts outside standard daylight hours (to avoid confusion post-onboarding): No
Background check before or after onboarding: Before - final BG report
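To make the distribution/sort-key best practice concrete, here is an illustrative Redshift DDL sketch, held in a Python string; the schema and column names are invented.

```python
# Invented fact-table DDL showing Redshift distribution and sort keys.
FACT_DDL = """
CREATE TABLE analytics.fact_orders (
    order_id     BIGINT        NOT NULL,
    customer_id  BIGINT        NOT NULL,
    order_date   DATE          NOT NULL,
    amount       DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (customer_id)   -- co-locate rows that join on customer_id
SORTKEY (order_date);   -- enable range-restricted scans on date filters
"""
print(FACT_DDL)
```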

Posted 1 week ago

Apply

3.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Develop, test and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance and information management, and associated technologies. Communicate risks and ensure they are understood.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Graduate with a minimum of 5+ years of related experience.
- Experience in modelling and business system designs.
- Good hands-on experience with DataStage and cloud-based ETL services.
- Great expertise in writing T-SQL code.
- Well-versed in data warehouse schemas and OLAP techniques.

Preferred technical and professional experience
- Ability to manage and make decisions about competing priorities and resources.
- Ability to delegate where appropriate.
- Must be a strong team player/leader.
- Ability to lead data transformation projects with multiple junior data engineers.
- Strong oral, written and interpersonal skills for interacting throughout all levels of the organization.
- Ability to communicate complex business problems and technical solutions.
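As a concrete illustration of the warehouse-schema knowledge these postings ask for, here is a minimal star-schema DDL sketch with invented names, held in a Python string.

```python
# Invented star schema: one fact table keyed to two dimension tables.
STAR_SCHEMA = """
CREATE TABLE dim_date (
    date_key     INT PRIMARY KEY,   -- e.g. 20240131
    full_date    DATE,
    month_name   VARCHAR(10),
    year_num     INT
);

CREATE TABLE dim_product (
    product_key  INT PRIMARY KEY,
    product_name VARCHAR(100),
    category     VARCHAR(50)
);

CREATE TABLE fact_sales (
    date_key     INT REFERENCES dim_date(date_key),
    product_key  INT REFERENCES dim_product(product_key),
    quantity     INT,
    revenue      DECIMAL(12,2)
);
"""
print(STAR_SCHEMA)
```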

Posted 1 week ago

Apply

2.0 - 5.0 years

9 - 13 Lacs

Gurugram

Work from Office

Develop, test and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance and information management, and associated technologies. Communicate risks and ensure they are understood.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Graduate with a minimum of 5+ years of related experience.
- Experience in modelling and business system designs.
- Good hands-on experience with DataStage and cloud-based ETL services.
- Great expertise in writing T-SQL code.
- Well-versed in data warehouse schemas and OLAP techniques.

Preferred technical and professional experience
- Ability to manage and make decisions about competing priorities and resources.
- Ability to delegate where appropriate.
- Must be a strong team player/leader.
- Ability to lead data transformation projects with multiple junior data engineers.
- Strong oral, written and interpersonal skills for interacting throughout all levels of the organization.
- Ability to communicate complex business problems and technical solutions.

Posted 1 week ago

Apply

2.0 - 5.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Develop, test and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance and information management, and associated technologies. Communicate risks and ensure they are understood.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Graduate with a minimum of 5+ years of related experience.
- Experience in modelling and business system designs.
- Good hands-on experience with DataStage and cloud-based ETL services.
- Great expertise in writing T-SQL code.
- Well-versed in data warehouse schemas and OLAP techniques.

Preferred technical and professional experience
- Ability to manage and make decisions about competing priorities and resources.
- Ability to delegate where appropriate.
- Must be a strong team player/leader.
- Ability to lead data transformation projects with multiple junior data engineers.
- Strong oral, written and interpersonal skills for interacting throughout all levels of the organization.
- Ability to communicate complex business problems and technical solutions.

Posted 1 week ago

Apply

5.0 years

15 Lacs

Indore

On-site

Job description

An out-of-the-box thinker to build innovative AI/ML models:

1. Understand and analyze requirements for machine learning models from product owners, customers, and other stakeholders.
2. Analyze and verify data quality and features.
3. Design solutions by choosing the right algorithms, features, and hyperparameters.
4. Manage the full lifecycle of ML models: data acquisition, feature engineering, model development, training, verification, optimization, deployment, and versioning.
5. Augment enterprise data with publicly available datasets to enrich model features.
6. Create strategies for integrating the Whiz.AI platform with external enterprise data sources like databases, data warehouses, analytical stores, external ML systems/algorithms, Hadoop and ERP/CRM systems.

Qualifications

Technical:
- 5+ years of experience implementing machine learning and deep learning models applied to traditional as well as NLP problems
- Machine-learning-based models: ANN, SVM, logistic regression, gradient boosting
- Time series anomaly detection methods; hierarchical or grouped time series forecasting
- Knowledge of BERT, LSTMs, RNNs, and HMMs applied to text classification and text generation problems
- Understanding of ML data processing frameworks like TensorFlow or PyTorch, XGBoost, SciPy, scikit-learn, and Apache Spark SQL, and handling big data and databases
- Excellent knowledge of Python programming, NumPy, Pandas, and processing JSON, XML, and CSV files

Non-technical:
- Good communication and analytical skills
- Self-driven, with a strong sense of ownership and urgency

Preferred qualifications:
- Preference will be given to hands-on deep learning and NLP application experience
- Knowledge of analytical/OLAP/columnar databases, the Hadoop ecosystem and NoSQL databases
- Deep learning, GANs, reinforcement learning
- R programming, MATLAB
- Knowledge of life sciences or pharmaceutical industry datasets

Interested candidates can share their resume at shwetachouhan@valere.io

Job Type: Full-time
Pay: From ₹1,500,000.00 per year
Benefits: Health insurance, paid sick time, paid time off, Provident Fund
Education: Bachelor's (Preferred)
Experience: software development: 1 year (Preferred); HTML5: 1 year (Preferred); total work: 5 years (Preferred)
Work Location: In person
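One of the listed techniques, time-series anomaly detection, can be sketched in a few lines with scikit-learn; the data below is synthetic and the contamination rate is an arbitrary illustration, not a recommended setting.

```python
# Toy anomaly detection on a synthetic time series with IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.1, 500)
series[[100, 250, 400]] += 3.0   # inject three obvious anomalies

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(series.reshape(-1, 1))   # -1 marks anomalies
print("anomalous indices:", np.where(labels == -1)[0])
```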

Posted 1 week ago

Apply

7.0 - 8.0 years

15 - 30 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Skills needed:

- 7+ years of proven experience writing and optimizing complex SQL queries, stored procedures, functions, triggers and views for both OLTP (Online Transaction Processing) and OLAP (Online Analytical Processing) environments.
- Create and provide optimized SQL views and data sets for business intelligence tools (e.g., Power BI, Tableau, Qlik Sense) and reporting applications.
- Work closely with BI developers and data analysts to understand their data needs and provide efficient data access solutions.
- Experience with cloud-based data warehousing platforms (e.g., Snowflake, Databricks, Azure Synapse Analytics).
- Strong proficiency in SQL (Structured Query Language) and a deep understanding of relational database concepts.
- Extensive experience with at least one major RDBMS (e.g., Microsoft SQL Server, MySQL, PostgreSQL, Oracle).
- Solid understanding of database design principles, data modelling, and normalization.
- Experience with query troubleshooting, performance tuning and query optimization techniques.

Nice to have:

- Knowledge of the commercial life sciences and bio-pharma industry is highly desirable.
- Comfortable with commercial datasets: sales data from Iqvia, Symphony, Komodo, etc.; CRM data from Veeva, OCE, etc.
- Knowledge of scripting languages (e.g., Python, PowerShell) for data manipulation and automation within a data pipeline.
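An illustrative example of the "optimized SQL views for BI tools" responsibility: a view that pre-aggregates an OLTP orders table for reporting. All names are invented and the syntax is generic ANSI-style SQL, held in a Python string.

```python
# Invented reporting view: monthly revenue by region for a BI dashboard.
REPORTING_VIEW = """
CREATE VIEW reporting.v_monthly_sales AS
SELECT
    DATE_TRUNC('month', o.order_date) AS sales_month,
    c.region,
    COUNT(*)      AS order_count,
    SUM(o.amount) AS total_revenue
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
GROUP BY DATE_TRUNC('month', o.order_date), c.region;
"""
print(REPORTING_VIEW)
```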

Posted 1 week ago

Apply

0.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

About Us

Observe.AI is transforming customer service with AI agents that speak, think, and act like your best human agents—helping enterprises automate routine customer calls and workflows, support agents in real time, and uncover powerful insights from every interaction. With Observe.AI, businesses boost automation, deliver faster, more consistent 24/7 service, and build stronger customer loyalty. Trusted by brands like Accolade, Prudential, Concentrix, Cox Automotive, and Included Health, Observe.AI is redefining how businesses connect with customers—driving better experiences and lasting relationships at every touchpoint.

The Opportunity

We are looking for a Senior Data Engineer with strong hands-on experience in building scalable data pipelines and real-time processing systems. You will be part of a high-impact team focused on modernizing our data architecture, enabling self-serve analytics, and delivering high-quality data products. This role is ideal for engineers who love solving complex data challenges, have a growth mindset, and are excited to work on both batch and streaming systems.

What you'll be doing:

- Build and maintain real-time and batch data pipelines using tools like Kafka, Spark, and Airflow.
- Contribute to the development of a scalable lakehouse architecture using modern data formats such as Delta Lake, Hudi, or Iceberg.
- Optimize data ingestion and transformation workflows across cloud platforms (AWS, GCP, or Azure).
- Collaborate with Analytics and Product teams to deliver data models, marts, and dashboards that drive business insights.
- Support data quality, lineage, and observability using modern practices and tools.
- Participate in Agile processes (sprint planning, reviews) and contribute to team knowledge sharing and documentation.
- Contribute to building data products for inbound (ingestion) and outbound (consumption) use cases across the organization.

Who you are:

- 5-8 years of experience in data engineering or backend systems, with a focus on large-scale data pipelines.
- Hands-on experience with streaming platforms (e.g., Kafka) and distributed processing tools (e.g., Spark or Flink).
- Working knowledge of lakehouse formats (Delta/Hudi/Iceberg) and columnar storage like Parquet.
- Proficient in building pipelines on AWS, GCP, or Azure using managed services and cloud-native tools.
- Experience with Airflow or similar orchestration platforms.
- Strong in data modeling and optimizing data warehouses like Redshift, BigQuery, or Snowflake.
- Exposure to real-time OLAP tools like ClickHouse, Druid, or Pinot.
- Familiarity with observability tools such as Grafana, Prometheus, or Loki.
- Some experience integrating data with MLOps tools like MLflow, SageMaker, or Kubeflow.
- Ability to work with Agile practices using JIRA and Confluence, and to participate in engineering ceremonies.

Compensation, Benefits and Perks

- Excellent medical insurance options and free online doctor consultations
- Yearly privilege and sick leaves as per the Karnataka S&E Act
- Generous holidays (national and festive), recognition, and parental leave policies
- Learning & Development fund to support your continuous learning journey and professional development
- Fun events to build culture across the organization
- Flexible benefit plans for tax exemptions (i.e., meal card, PF, etc.)

Our Commitment to Inclusion and Belonging

Observe.AI is an Equal Employment Opportunity employer that proudly pursues and hires a diverse workforce. Observe.AI does not make hiring or employment decisions on the basis of race, color, religion or religious belief, ethnic or national origin, nationality, sex, gender, gender identity, sexual orientation, disability, age, military or veteran status, or any other basis protected by applicable local, state, or federal laws or prohibited by Company policy. Observe.AI also strives for a healthy and safe workplace and strictly prohibits harassment of any kind.

We welcome all people. We celebrate diversity of all kinds and are committed to creating an inclusive culture built on a foundation of respect for all individuals. We seek to hire, develop, and retain talented people from all backgrounds. Individuals from non-traditional backgrounds, historically marginalized or underrepresented groups are strongly encouraged to apply. If you are ambitious, make an impact wherever you go, and you're ready to shape the future of Observe.AI, we encourage you to apply.
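A minimal Airflow DAG sketch for the batch-pipeline side of this role; the task bodies, DAG id, and schedule are placeholders, and it assumes Airflow 2.4 or later.

```python
# Skeleton extract-transform-load DAG; each task body is a stand-in.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from object storage")   # placeholder

def transform():
    print("clean and conform events")              # placeholder

def load():
    print("write curated table to the lakehouse")  # placeholder

with DAG(
    dag_id="events_daily_batch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # linear dependency chain
```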

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana

On-site

Job Title: Databricks Developer / Data Engineer
Duration: 12 months, with possible extension
Location: Hyderabad, Telangana (hybrid; 1-2 days onsite at the client location)

Job Summary:

We are seeking a highly skilled Databricks Developer / Data Engineer with 5+ years of experience in building scalable data pipelines, managing large datasets, and optimizing data workflows in cloud environments. The ideal candidate will have hands-on expertise in Azure Databricks, Azure Data Factory, and other Azure-native services, playing a key role in enabling data-driven decision-making across the organization.

Key Responsibilities:

- Design, develop, and maintain scalable ETL/ELT pipelines for data ingestion, transformation, and integration
- Work with both structured and unstructured data from a variety of internal and external sources
- Collaborate with data analysts, scientists, and engineers to ensure data quality, integrity, and availability
- Build and manage data lakes, data warehouses, and data models (Azure Databricks, Azure Data Factory, Snowflake, etc.)
- Optimize performance of large-scale batch and real-time processing systems
- Implement data governance, metadata management, and data lineage practices
- Monitor and troubleshoot pipeline issues; perform root cause analysis and proactive resolution
- Automate data validation and quality checks
- Ensure compliance with data privacy, security, and regulatory requirements
- Maintain thorough documentation of architecture, data workflows, and processes

Mandatory Qualifications:

- 5+ years of hands-on experience with: Azure Blob Storage, Azure Data Lake Storage, Azure SQL Database; Azure Logic Apps, Azure Data Factory, Azure Databricks, Azure ML; Azure DevOps Services, Azure API Management, webhooks
- Intermediate-level proficiency in Python scripting and PySpark
- Basic understanding of Power BI and its visualization functionality

Technical Skills & Experience Required:

- Proficient in SQL and working with both relational and non-relational databases (e.g., SQL, PostgreSQL, MongoDB, Cassandra)
- Hands-on experience with Apache Spark, Hadoop, and Hive for big data processing
- Proficiency in building scalable data pipelines using Azure Data Factory and Azure Databricks
- Solid knowledge of cloud-native tools: Delta Lake, Azure ML, Azure DevOps
- Understanding of data modeling, OLAP/OLTP systems, and data warehousing best practices
- Experience with CI/CD pipelines, version control with Git, and working with Azure Repos
- Knowledge of data security, privacy policies, and compliance frameworks
- Excellent problem-solving, troubleshooting, and analytical skills
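A hedged Databricks-style PySpark ETL sketch matching the listed stack (ADLS, PySpark, Delta); the storage paths and column names are placeholders, and it assumes a Databricks runtime where Delta Lake is available.

```python
# Ingest CSVs from a landing zone, clean them, and append to a Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("abfss://landing@examplelake.dfs.core.windows.net/orders/"))  # placeholder path

clean = (raw
         .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
         .filter(F.col("order_id").isNotNull())
         .dropDuplicates(["order_id"]))

(clean.write
 .format("delta")
 .mode("append")
 .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))  # placeholder path
```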

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Intermediate Programmer Analyst position at our company involves actively participating in the establishment and implementation of new or updated application systems and programs in collaboration with the Technology team. Your main goal in this role will be to contribute to applications systems analysis and programming activities.

With 6+ years of experience in MicroStrategy SDK development, you will be proficient in Java, JavaScript, jQuery, HTML, CSS, REST APIs, and MicroStrategy. Your responsibilities will include developing schema objects like attributes, facts, and transformations, as well as creating public objects such as filters, prompts, and reports. You will also work on document and dashboard development, intelligent cubes, cube reports, and performance optimization in MSTR and UI technologies. Familiarity with MOLAP, ROLAP, and OLAP concepts, the CI/CD process, Agile development, MSTR iCube automation, the MSTR REST API, Library, and DW concepts will be beneficial. Experience in MSTR version upgrades, collaborating with geographically dispersed teams, and leading small to medium teams of BI developers will be valuable assets. As a UI developer, your skills in JavaScript, jQuery, HTML, CSS, REST APIs, and the basics of MicroStrategy will be put to good use.

Your qualifications should include a minimum of 6 years of relevant experience in MicroStrategy SDK, SQL, and Web SDK. It is essential that you possess clear and concise written and verbal communication skills, problem-solving abilities, decision-making skills, and the capacity to work under pressure while managing deadlines or unexpected changes in expectations or requirements. A Bachelor's degree, University degree, or equivalent experience is required for this position, and it is a full-time role within the Technology job family group, specifically in Applications Development.

If you require a reasonable accommodation due to a disability for using our search tools or applying for a career opportunity, please review Accessibility at Citi. You can also refer to Citi's EEO Policy Statement and the Know Your Rights poster for further information.
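As a rough sketch of the MSTR REST API work mentioned here, below is a login call in Python; the Library URL and credentials are placeholders, and the loginMode value and token header should be verified against your MicroStrategy version's documentation.

```python
# Authenticate against a hypothetical MicroStrategy Library REST endpoint.
import requests

BASE = "https://mstr.example.com/MicroStrategyLibrary/api"  # placeholder

resp = requests.post(f"{BASE}/auth/login", json={
    "username": "analyst",   # placeholder credentials
    "password": "***",
    "loginMode": 1,           # standard authentication (verify per version)
})
resp.raise_for_status()
token = resp.headers["X-MSTR-AuthToken"]

# Subsequent calls carry the auth token header plus the session cookie.
session = requests.Session()
session.headers.update({"X-MSTR-AuthToken": token})
session.cookies.update(resp.cookies)
```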

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

The role of Sr Specialist Visualization & Automation in Hyderabad, India involves defining and leading the platform engineering of business intelligence solutions, with a specific focus on Power BI technology. Your key responsibilities include overseeing the creation and management of BI and analytics solutions using strong Power BI skills. You will drive the success of technology usage for solution delivery, best practices, compliance, and enablement of the business. Collaboration with the solution architect and platform architect is crucial in defining visualization architecture patterns based on functional and non-functional requirements. You will also be responsible for driving the DevOps roadmap to enable Agile ways of working, the CI/CD pipeline, and automation for self-serve governance of the Power BI platform, in alignment with the platform lead. Ensuring adherence to security and compliance policies and procedures is paramount in this role. You will play a vital role in defining architecture standards, patterns, and platform solutions while upholding Information Security & Compliance (ISC), legal, ethics, and other compliance policies.

The ideal candidate should have 8-10 years of IT experience in data and analytics and visualization, with strong exposure to Power BI solution delivery and platform automation. Proficiency in database management systems, ETL, OLAP, and data lake technologies is required. Experience in Power BI and knowledge of other visualization technologies are advantageous. A specialization in the pharma domain and an understanding of data usage across the enterprise value chain are desirable. Good interpersonal, written, and verbal communication skills are essential, along with the ability to manage vendor and customer expectations. Your technical expertise, understanding of business processes and systems, and commitment to Novartis Values & Behaviors will be critical in this role.

Novartis is committed to creating an inclusive work environment and diverse teams that represent the patients and communities served. By joining Novartis, you will contribute to reimagining medicine to improve and extend people's lives, striving to become the most valued and trusted medicines company globally. If you are passionate about making a difference and want to be part of a community that drives breakthroughs to change patients' lives, consider joining the Novartis Network to explore career opportunities and stay connected with Novartis. Learn more about our commitment to diversity and inclusion and the benefits and rewards we offer to help you thrive both personally and professionally.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Dehradun, Uttarakhand, India

On-site

We are looking for a skilled Data Modeller with strong experience in the big data ecosystem, particularly in the Azure Data Platform and Databricks environment. The ideal candidate should have a deep understanding of data modelling principles and hands-on expertise in building models in modern data architectures such as Unity Catalog and Delta Lake.

Key Responsibilities:

- Design and develop conceptual, logical, and physical data models to support enterprise analytics and reporting needs
- Build and manage data models in Unity Catalog within the Databricks environment
- Work across teams to model and structure data in Delta Lake, and optimize for performance and reusability
- Collaborate with data engineers, architects, and analysts to ensure models align with data ingestion, transformation, and business reporting workflows
- Translate business requirements into scalable and efficient data designs using best practices in data warehousing and lakehouse architecture
- Maintain comprehensive documentation, including data dictionaries, data lineage, and metadata
- Implement and support data governance, data quality, and security controls across datasets and platforms

Qualifications and Skills:

- 10+ years of hands-on data modelling experience in the big data ecosystem, with a strong understanding of OLTP, OLAP, and dimensional modelling
- Hands-on experience with data modelling techniques like Kimball, Inmon, Data Vault, and dimensional modelling
- Strong proficiency in data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner, dbt, SQLDBM, or Lucidchart)
- Experience building and maintaining data models using Unity Catalog in Databricks
- Proven experience working with the Azure Data Platform, including services like Azure Data Factory, Azure Data Lake, Azure Synapse Analytics, and Azure SQL Database
- Strong proficiency in SQL and Apache Spark for data transformation and querying
- Familiarity with Delta Lake, Parquet, and modern data storage formats
- Knowledge of data cataloging tools such as Azure Purview is a plus
- Excellent problem-solving skills and the ability to work in agile and fast-paced environments
- Strong communication skills to articulate data concepts to technical and non-technical stakeholders

Preferred Qualifications:

- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
- Relevant certifications such as DP-203 (Azure Data Engineer Associate)

About Us:

We're an international team who specialize in building technology products and then helping brands grow with multi-channel demand generation marketing. We have in-house experience working for Fortune companies, e-commerce brands, and technology SaaS companies. We have assisted over a dozen billion-dollar companies with consulting, technology, operations, and digital agency capabilities in managing their unique brand online.

We have a fun and friendly work culture that also encourages employees personally and professionally. EbizON has many values that are important to our success as a company: integrity, creativity, innovation, mindfulness and teamwork. We thrive on the idea of making life better for people by providing them with peace of mind. The people here love what they do because everyone, from management on down, understands how much living up to one's ideals means, and every day feels less stressful knowing each person has somebody cheering them on.

Equal Opportunity Employer: EbizON is committed to providing equal opportunity for all employees, and we will consider any qualified applicant without regard to race or other prohibited characteristics.

Flexible Timings: Flexible working hours are the new normal. We at EbizON believe in giving employees the freedom to choose when to work and how to work; it helps them thrive and balance their life better.

Global Clients Exposure: Our goal is to provide excellent customer service, and we want our employees to work closely with clients from around the world. That's why you'll find us working closely with clients from around the world through Microsoft Teams, Zoom and other video conferencing tools.

Retreats & Celebrations: With annual retreats, quarterly town halls and festive celebrations, we have a lot of opportunities to get together.
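A small sketch of registering a modelled dimension table in Unity Catalog from a Databricks notebook, as this posting describes; the catalog, schema, and column names are invented.

```python
# Create an invented customer dimension as a Delta table in Unity Catalog.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
CREATE TABLE IF NOT EXISTS main.sales.dim_customer (
    customer_key   BIGINT,
    customer_name  STRING,
    region         STRING,
    effective_from DATE,
    effective_to   DATE   -- slowly-changing-dimension validity window
)
USING DELTA
COMMENT 'Customer dimension modelled for the sales mart'
""")
```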

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a skilled Senior Data Modeller to design, implement, and maintain conceptual, logical, and physical data models that support enterprise information management and business intelligence efforts. The ideal candidate will collaborate with business analysts, data architects, and developers to ensure high-quality data models that meet both business and technical requirements.

- Skills: GCP, data modelling (OLTP, OLAP), indexing, DBSchema, CloudSQL, BigQuery
- Hands-on data modelling for OLTP and OLAP systems.
- In-depth knowledge of conceptual, logical and physical data modelling.
- Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same.
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction.
- Working experience with at least one data modelling tool, preferably DBSchema.
- Functional knowledge of the mutual fund industry will be a plus.
- Good understanding of GCP databases like AlloyDB, CloudSQL and BigQuery.
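To ground the BigQuery modelling points, here is a hedged DDL sketch using partitioning and clustering (BigQuery's analogues to the indexing and sharding concerns above); the dataset and table names are invented.

```python
# Create an invented, partitioned and clustered BigQuery table.
from google.cloud import bigquery

client = bigquery.Client()
client.query("""
CREATE TABLE IF NOT EXISTS funds.nav_history (
    fund_id   STRING,
    nav_date  DATE,
    nav_value NUMERIC
)
PARTITION BY nav_date   -- prune scans to the dates a report needs
CLUSTER BY fund_id      -- co-locate rows that are read together
""").result()
```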

Posted 1 week ago

Apply

14.0 - 18.0 years

50 - 90 Lacs

Bengaluru

Work from Office

About Netskope

Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine cloud, network and data security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About the team

DSPM is designed to provide comprehensive data visibility and contextual security for the modern AI-driven enterprise. Our platform automatically discovers, classifies, and secures sensitive data across diverse environments including AWS, Azure, Google Cloud, Oracle Cloud, and on-premise infrastructure. DSPM is critical in empowering CISOs and security teams to enforce secure AI practices, reduce compliance risk, and maintain continuous data protection at scale.

What's in it for you

We are a distributed team of passionate engineers dedicated to continuous learning and building impactful products. Our core product is built from the ground up in Scala, leveraging the Lightbend stack (Play/Akka). As a key member of the DSPM team, you will contribute to developing innovative and scalable systems designed to protect the exponentially increasing volume of enterprise data. Our platform continuously maps sensitive data across all major cloud providers and on-prem environments, automatically detecting and classifying sensitive and regulated data types such as PII, PHI, financial, and healthcare information. It flags data at risk of exposure, exfiltration, or misuse and helps prioritize issues based on sensitivity, exposure, and business impact.

What you will be doing

- Drive the enhancement of Data Security Posture Management (DSPM) capabilities by enabling the detection of sensitive or risky data utilized in (but not limited to) training private LLMs or accessed by public LLMs.
- Improve the DSPM platform to extend support of the product to all major cloud infrastructures, on-prem deployments, and any new upcoming technologies.
- Provide technical leadership in all phases of a project, from discovery and planning through implementation and delivery.
- Contribute to product development: understand customer requirements and work with the product team to design, plan, and implement features.
- Support customers by investigating and fixing production issues.
- Help us improve our processes and make the right tradeoffs between agility, cost, and reliability.
- Collaborate with teams across geographies.

Required skills and experience

- 12+ years of software development experience with enterprise-grade software.
- Must have experience in building scalable, high-performance cloud services.
- Expert coding skills in Scala or Java.
- Development on cloud platforms including AWS.
- Deep knowledge of databases and data warehouses (OLTP, OLAP).
- Analytical and troubleshooting skills.
- Experience working with Docker and Kubernetes.
- Able to multitask and wear many hats in a fast-paced environment. This week you might lead the design of a new feature; next week you are fixing a critical bug or improving our CI infrastructure.
- Cybersecurity experience and adversarial thinking is a plus.
- Expertise in building REST APIs.
- Experience leveraging cloud-based AI services (e.g., AWS Bedrock, SageMaker), vector databases, and Retrieval Augmented Generation (RAG) architectures is a plus.

Education: BSCS or equivalent required, MSCS or equivalent strongly preferred

#LI-JB3

Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.

Posted 1 week ago

Apply

2.0 - 9.0 years

4 - 11 Lacs

Hyderabad

Work from Office

Career Category: Information Systems

Job Description

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do

In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Design, build, and support data ingestion, transformation, and delivery pipelines across structured and unstructured sources within the enterprise data engineering environment
Manage and monitor day-to-day operations of the data engineering environment, ensuring high availability, performance, and data integrity
Collaborate with data architects, data governance, platform engineering, and business teams to support data integration use cases across R&D, Clinical, Regulatory, and Commercial functions
Integrate data from laboratory systems, clinical platforms, regulatory systems, and third-party data sources into enterprise data repositories
Implement and maintain metadata capture, data lineage, and data quality checks across pipelines to meet governance and compliance requirements
Support real-time and batch data flows using technologies such as Databricks, Kafka, Delta Lake, or similar
Work within GxP-aligned environments, ensuring compliance with data privacy, audit, and quality control standards
Partner with data stewards and business analysts to support self-service data access, reporting, and analytics enablement
Maintain operational documentation, runbooks, and process automation scripts for continuous improvement of data fabric operations
Participate in incident resolution and root cause analysis, ensuring timely and effective remediation of data pipeline issues
Create documentation, playbooks, and best practices for metadata ingestion, data lineage, and catalog usage
Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
Collaborate and communicate effectively with product teams and other cross-functional teams to understand business requirements and translate them into technical solutions

Must-Have Skills:

Experience building and maintaining data pipelines that ingest and update metadata in enterprise data catalog platforms in biotech, life sciences, or pharma
Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
Proficiency in workflow orchestration and performance tuning for big data processing
Experience in data engineering, data operations, or related roles, with at least 2+ years in life sciences, biotech, or pharmaceutical environments
Experience with cloud platforms (e.g., AWS, Azure, or GCP) for data pipeline and storage solutions
Understanding of data governance frameworks, metadata management, and data lineage tracking
Strong problem-solving and analytical skills, attention to detail, and the ability to manage multiple priorities in a dynamic environment
Effective communication, collaboration, and teamwork skills for working across technical and business stakeholders
Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Preferred Qualifications:

Data engineering experience in the biotechnology or pharma industry
Experience writing APIs to make data available to consumers
Experience with SQL/NoSQL databases and vector databases for large language models
Experience with data modeling and performance tuning for both OLAP and OLTP databases
Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Basic Qualifications:

Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience; or
Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience; or
Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Professional Certifications:

AWS Certified Data Engineer preferred
Databricks certification preferred
Scaled Agile (SAFe) certification preferred

Soft Skills:

Excellent verbal and written communication skills
High degree of professionalism and interpersonal skills
Excellent critical-thinking and problem-solving skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us at careers.amgen.com.

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

EQUAL OPPORTUNITY STATEMENT

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
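As an illustration of the pipeline work this posting describes, here is a minimal PySpark sketch of a batch ingestion job with a simple data-quality gate, written against Delta Lake. It is a sketch under stated assumptions, not the employer's actual implementation: the S3 paths, column names, and source-system label are hypothetical, and the Delta write assumes a Delta-enabled Spark session (e.g., on Databricks).

```python
# Minimal sketch: batch ingestion with a data-quality gate (hypothetical names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

# Ingest a structured source extract (hypothetical landing path).
raw = (spark.read.option("header", True)
       .csv("s3://example-bucket/landing/lab_results/"))

# Light transformation: standardize a key column and stamp lineage metadata.
curated = (raw.withColumnRenamed("SUBJ_ID", "subject_id")
              .withColumn("ingest_ts", F.current_timestamp())
              .withColumn("source_system", F.lit("lab_system_x")))

# Data-quality gate: abort the load if the join key is ever null.
null_keys = curated.filter(F.col("subject_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows missing subject_id; aborting load")

# Append to a Delta table (assumes a Delta-enabled Spark session).
(curated.write.format("delta")
        .mode("append")
        .save("s3://example-bucket/curated/lab_results/"))
```

In a governed (e.g., GxP-aligned) environment, the same gate would typically also emit its counts to a data-quality log rather than only raising, so failed loads remain auditable.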

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderābād

On-site

Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage, and integration.
Must have skills: Microsoft Azure Analytics Services
Good to have skills: NA
Minimum 12 years of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure that data flows seamlessly and efficiently throughout the organization, while also addressing any challenges that arise in the data management process. Your role will be pivotal in shaping the data landscape of the organization, enabling informed decision-making and strategic planning.

Roles & Responsibilities:
A. Function as the lead Data Architect for a small, simple project or proposal, or as a team lead for a medium or large project or proposal
B. Discuss specific Big Data architecture and related issues with the client architect and team (in area of expertise)
C. Analyze and assess the impact of the requirements on the data and its lifecycle
D. Lead the architecture and design of medium-to-large cloud-based Big Data and analytical solutions using the Lambda architecture
E. Bring breadth of experience across varied client scenarios and situations
F. Demonstrate experience in Big Data architecture-based sales and delivery
G. Provide thought leadership and innovation
H. Lead the creation of new data assets and offerings
I. Handle both OLTP and OLAP data workloads

Professional & Technical Skills:
A. Strong Azure experience is preferred, with hands-on experience in two or more of the following: Azure Synapse Analytics, Azure HDInsight, Azure Databricks with PySpark/Scala/SparkSQL, Azure Analysis Services
B. Experience in one or more real-time/streaming technologies, including Azure Stream Analytics, Azure Data Explorer, Azure Time Series Insights, etc.
C. Experience in handling medium to large Big Data implementations
D. The candidate must have 15 years of IT experience, including around 5 years of extensive Big Data experience (design and build)

Additional Information:
A. Should be able to drive technology design meetings and propose technology design and architecture
B. Should have excellent client communication skills
C. Should have good analytical and problem-solving skills

15 years of full-time education is required.
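To make the Lambda-architecture requirement concrete, here is a minimal PySpark Structured Streaming sketch of the speed layer, of the kind one might run on Azure Databricks. Everything named here is a hypothetical assumption: the broker address and topic, the in-memory sink, and the presence of the spark-sql-kafka connector on the cluster (Azure Event Hubs exposes a Kafka-compatible endpoint, though a real connection also needs SASL/TLS options omitted here).

```python
# Minimal sketch: Lambda-architecture speed layer (hypothetical broker/topic).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("speed-layer-sketch").getOrCreate()

# Read the event stream (requires the spark-sql-kafka connector).
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers",
                  "example-namespace.servicebus.windows.net:9093")
          .option("subscribe", "telemetry")
          .load())

# Near-real-time view: event counts per one-minute window, keyed on the
# timestamp column the Kafka source provides.
counts = (events
          .groupBy(F.window(F.col("timestamp"), "1 minute"))
          .count())

# In-memory sink for illustration only; a real speed layer would write to a
# serving store. The batch layer recomputes the same aggregate over the full
# history, and queries reconcile the two views.
query = (counts.writeStream
         .outputMode("complete")
         .format("memory")
         .queryName("event_counts")
         .start())
```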

Posted 1 week ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Years of Experience: 1 - 3 years
Location: Noida, Indore

Requisition Description:
Experience in writing and troubleshooting SQL queries
Proficient in database and data warehousing concepts
Proven hands-on experience in designing, developing, and supporting database projects for analysis
Good written and verbal communication skills

Knowledge and/or experience of the following will be an added advantage:
MDX/DAX
Database design techniques
Data modeling
SSAS
Spark processing
Hadoop ecosystem, or AWS, Azure, or GCP cluster and processing
Hive
Redshift or Snowflake
Linux systems
Tableau, MicroStrategy, Power BI, or any BI tool
Programming in Python, Java, or shell script

Roles and Responsibilities:
Interact with the senior-most technical and business people of large enterprises to understand their analytics strategy and their problem statements in that area
Understand the customer domain and database schema
Design OLAP semantic models and dashboards
Be the go-to person for customers regarding technical issues during the project
Report task status efficiently to stakeholders and customers
Be willing to work off hours to meet timelines
Be willing to travel or relocate per project requirements
Be willing to work on different technologies
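Since this role centers on designing OLAP semantic models, a small self-contained example may help: the sketch below runs the star-schema rollup query that a semantic layer (via MDX or DAX) typically generates under the hood. SQLite is used purely so the example runs anywhere; the fact and dimension tables and their contents are hypothetical.

```python
# Minimal sketch: star-schema rollup of a measure over dimension attributes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, sale_date TEXT, amount REAL);
INSERT INTO dim_product VALUES (1, 'Hardware'), (2, 'Software');
INSERT INTO fact_sales  VALUES (1, '2024-01-05', 100.0),
                               (2, '2024-01-06', 250.0),
                               (1, '2024-02-01', 75.0);
""")

# Sum the sales measure by product category and month: the same shape of
# query an OLAP semantic model produces from an MDX/DAX request.
rollup = conn.execute("""
    SELECT p.category,
           strftime('%Y-%m', f.sale_date) AS month,
           SUM(f.amount)                  AS total_sales
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.category, month
    ORDER BY p.category, month
""").fetchall()

for category, month, total in rollup:
    print(category, month, total)
```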

Posted 1 week ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description: QA Automation Engineer

As a QA Automation Engineer specializing in Data Warehousing, you will play a critical role in ensuring that our data solutions are of the highest quality. You will work closely with data engineers and analysts to develop, implement, and maintain automated testing frameworks for data validation, ETL processes, data quality, and integration. Your work will ensure that data is accurate, consistent, and performs optimally across our data warehouse systems.

Responsibilities:
Develop and Implement Automation Frameworks: Design, build, and maintain scalable test automation frameworks tailored for data warehousing environments.
Test Strategy and Execution: Define and execute automated test strategies for ETL processes, data pipelines, and database integration across a variety of data sources.
Data Validation: Implement automated tests to validate data consistency, accuracy, completeness, and transformation logic.
Performance Testing: Ensure that data warehouse systems meet performance benchmarks through automation tools and load-testing strategies.
Collaborate with Teams: Work closely with data engineers, software developers, and data analysts to understand business requirements and design tests accordingly.
Continuous Integration: Integrate automated tests into CI/CD pipelines, ensuring that testing is part of the deployment process.
Defect Tracking and Reporting: Use defect-tracking tools (e.g., JIRA) to log and track issues found during automated testing, ensuring that defects are resolved in a timely manner.
Test Data Management: Develop strategies for handling large volumes of test data while maintaining data security and privacy.
Tool and Technology Evaluation: Stay current with emerging trends in automation testing for data warehousing and recommend tools, frameworks, and best practices.

Job Qualifications: Requirements and Skills
At least 4+ years of experience
Solid understanding of data warehousing concepts (ETL, OLAP, data marts, data vault, star/snowflake schemas, etc.)
Proven experience in building and maintaining automation frameworks using tools like Python, Java, or similar, with a focus on database and ETL testing
Strong knowledge of SQL for writing complex queries to validate data, test data pipelines, and check transformations
Experience with ETL tools (e.g., Matillion, Qlik Replicate) and their testing processes
Performance testing experience
Experience with version control systems like Git
Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues
Strong communication and collaboration skills
Attention to detail and a passion for delivering high-quality solutions
Ability to work in a fast-paced environment and manage multiple priorities
Enthusiasm for learning new technologies and frameworks

Experience with the following tools and technologies is desired:
Qlik Replicate
Matillion ETL
Snowflake
Data Vault warehouse design
Power BI
Azure Cloud, including Logic Apps, Azure Functions, and ADF
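As one concrete shape this work can take, here is a minimal pytest sketch of automated ETL validation: a completeness check (row counts) and an accuracy check (a measure surviving the transformation unchanged). The staging and fact tables are hypothetical and built in-memory with SQLite so the tests are self-contained; in practice the fixture would hold a connection to the real warehouse.

```python
# Minimal sketch: automated ETL validation with pytest (hypothetical tables).
import sqlite3
import pytest

@pytest.fixture
def warehouse():
    # Stand-in for a real warehouse connection, populated so the tests run anywhere.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE staging_orders (order_id INTEGER, amount REAL);
    CREATE TABLE fact_orders    (order_id INTEGER, amount REAL);
    INSERT INTO staging_orders VALUES (1, 10.0), (2, 20.5);
    INSERT INTO fact_orders    VALUES (1, 10.0), (2, 20.5);
    """)
    yield conn
    conn.close()

def test_row_counts_match(warehouse):
    # Completeness: the target must contain every staged row.
    src = warehouse.execute("SELECT COUNT(*) FROM staging_orders").fetchone()[0]
    tgt = warehouse.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0]
    assert src == tgt

def test_amounts_survive_transformation(warehouse):
    # Accuracy: amounts must match between staging and target, row by row.
    diffs = warehouse.execute("""
        SELECT s.order_id
        FROM staging_orders s
        JOIN fact_orders f ON f.order_id = s.order_id
        WHERE ABS(s.amount - f.amount) > 1e-9
    """).fetchall()
    assert diffs == []
```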

Posted 1 week ago

Apply

10.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Requirements (AWS Data Engineer)

3 - 10 years of strong Python or Java data engineering experience
Experience developing data pipelines that process large volumes of data using Python, PySpark, Pandas, etc., on AWS
Experience developing ETL, OLAP-based, and analytical applications
Experience ingesting batch and streaming data from various data sources
Strong experience writing complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.)
Exposure to the AWS platform's data services (AWS Lambda, Glue, Athena, Redshift, Kinesis, etc.)
Experience with Airflow DAGs, AWS EMR, S3, IAM, and other services
Experience writing test cases using pytest, unittest, or any other framework
Snowflake or Redshift data warehouses
Experience with DevOps and CI/CD tools
Familiarity with REST APIs
Experience with CI/CD pipelines, branching strategies, and Git for code management
Bachelor's degree in computer science, information technology, or a similar field
You will need to be well spoken and have an easy time establishing productive, long-lasting working relationships with a large variety of stakeholders
Take the lead on data pipeline design, with strong analytical skills and a keen eye for detail to really understand and tackle the challenges businesses are facing
You will be confronted with a large variety of data engineering tools and other new technologies, as well as a wide variety of IT, compliance, and security-related issues
Design and develop world-class technology solutions to solve business problems across multiple client engagements
Collaborate with other teams to understand business requirements, client infrastructure, platforms, and overall strategy to ensure seamless transitions
Work closely with the AI and analytics team to build world-class solutions and to define AI strategy
You will possess strong logical structuring and problem-solving skills, an expert-level understanding of databases, and an inherent desire to turn data into actions
Strong verbal, written, and presentation skills
Comfortable working in Agile projects
Clear and precise communication skills
Ability to quickly learn and develop expertise in existing, highly complex applications and architectures
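Given the explicit mention of Airflow DAGs, a minimal orchestration sketch may be useful. The DAG below wires an extract, transform, and load sequence; the task bodies, bucket name, and daily schedule are hypothetical placeholders, and the schedule argument assumes Airflow 2.4 or later.

```python
# Minimal sketch: a daily extract/transform/load DAG (hypothetical tasks).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # e.g., pull a batch extract from S3 with boto3 (placeholder body).
    print("extracting from s3://example-bucket/raw/")

def transform():
    # e.g., reshape with Pandas or hand off to an EMR/PySpark job (placeholder).
    print("transforming")

def load():
    # e.g., COPY into Redshift or load a Snowflake stage (placeholder).
    print("loading to warehouse")

with DAG(
    dag_id="etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```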

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies