A dataset represents an SQL query, or more generally, an abstract set of rows in the database. Datasets can be used to create, retrieve, update and delete records.
Query results are always retrieved on demand, so a dataset can be kept around and reused indefinitely (datasets never cache results):
my_posts = DB[:posts].filter(:author => 'david') # no records are retrieved
my_posts.all # records are retrieved
my_posts.all # records are retrieved again
Most dataset methods return modified copies of the dataset (functional style), so you can reuse different datasets to access data:
posts = DB[:posts]
davids_posts = posts.filter(:author => 'david')
old_posts = posts.filter('stamp < ?', Date.today - 7)
davids_old_posts = davids_posts.filter('stamp < ?', Date.today - 7)
Datasets are Enumerable objects, so they can be manipulated using any of the Enumerable methods, such as map, inject, etc.
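For instance (assuming a hypothetical posts table with title and word_count columns), the usual Enumerable idioms work directly on a dataset:

  DB[:posts].map{|post| post[:title]}
  DB[:posts].inject(0){|sum, post| sum + post[:word_count]}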
Some methods are added via metaprogramming:
COLUMN_CHANGE_OPTS = [:select, :sql, :from, :join].freeze
  The dataset options that require the removal of cached columns if changed.
MUTATION_METHODS = %w'add_graph_aliases and distinct exclude exists filter from from_self full_outer_join graph group group_and_count group_by having inner_join intersect invert join left_outer_join limit naked or order order_by order_more paginate query reject reverse reverse_order right_outer_join select select_all select_more set_defaults set_graph_aliases set_overrides sort sort_by unfiltered union unordered where with_sql'.collect{|x| x.to_sym}
  All methods that should have a ! method added that modifies the receiver.
NOTIMPL_MSG = "This method must be overridden in Sequel adapters".freeze
COMMA_SEPARATOR = ', '.freeze
COUNT_OF_ALL_AS_COUNT = SQL::Function.new(:count, LiteralString.new('*'.freeze)).as(:count)
ARRAY_ACCESS_ERROR_MSG = 'You cannot call Dataset#[] with an integer or with no arguments.'.freeze
MAP_ERROR_MSG = 'Using Dataset#map with an argument and a block is not allowed'.freeze
GET_ERROR_MSG = 'must provide argument or block to Dataset#get, not both'.freeze
IMPORT_ERROR_MSG = 'Using Sequel::Dataset#import an empty column array is not allowed'.freeze
PREPARED_ARG_PLACEHOLDER = LiteralString.new('?').freeze
AND_SEPARATOR = " AND ".freeze
BOOL_FALSE = "'f'".freeze
BOOL_TRUE = "'t'".freeze
COLUMN_REF_RE1 = /\A([\w ]+)__([\w ]+)___([\w ]+)\z/.freeze
COLUMN_REF_RE2 = /\A([\w ]+)___([\w ]+)\z/.freeze
COLUMN_REF_RE3 = /\A([\w ]+)__([\w ]+)\z/.freeze
COUNT_FROM_SELF_OPTS = [:distinct, :group, :sql, :limit, :compounds]
IS_LITERALS = {nil=>'NULL'.freeze, true=>'TRUE'.freeze, false=>'FALSE'.freeze}.freeze
IS_OPERATORS = ::Sequel::SQL::ComplexExpression::IS_OPERATORS
N_ARITY_OPERATORS = ::Sequel::SQL::ComplexExpression::N_ARITY_OPERATORS
NULL = "NULL".freeze
QUESTION_MARK = '?'.freeze
STOCK_COUNT_OPTS = {:select => [SQL::AliasedExpression.new(LiteralString.new("COUNT(*)").freeze, :count)], :order => nil}.freeze
SELECT_CLAUSE_ORDER = %w'distinct columns from join where group having compounds order limit'.freeze
TWO_ARITY_OPERATORS = ::Sequel::SQL::ComplexExpression::TWO_ARITY_OPERATORS
WILDCARD = '*'.freeze
inner_join -> join
db [RW]
  The database that corresponds to this dataset.
identifier_input_method [RW]
  The method to call on identifiers going into the database for this dataset.
identifier_output_method [RW]
  The method to call on identifiers coming out of the database for this dataset.
opts [RW]
  The hash of options for this dataset; keys are symbols.
quote_identifiers [W]
  Whether to quote identifiers for this dataset.
row_proc [RW]
  The row_proc for this dataset; should be a Proc that takes a single hash argument and returns the object you want each to yield.
Set up mutation (e.g. filter!) methods. These operate the same as the non-! methods, but replace the options of the current dataset with the options of the resulting dataset.
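As a sketch of the effect, each method listed in MUTATION_METHODS gets a bang variant that modifies the receiver in place rather than returning a modified copy:

  posts = DB[:posts]
  posts.filter!(:author => 'david') # posts itself now carries the filter
  posts.order!(:stamp)              # modifies the same dataset again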
# File lib/sequel/dataset.rb, line 97
def self.def_mutation_method(*meths)
  meths.each do |meth|
    class_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end")
  end
end
Constructs a new Dataset instance with an associated database and options. Datasets are usually constructed by invoking the Database#[] method:
DB[:posts]
Sequel::Dataset is an abstract class that is not useful by itself. Each database adapter should provide a subclass of Sequel::Dataset, and have the Database#dataset method return an instance of that class.
# File lib/sequel/dataset.rb, line 83
def initialize(db, opts = nil)
  @db = db
  @quote_identifiers = db.quote_identifiers? if db.respond_to?(:quote_identifiers?)
  @identifier_input_method = db.identifier_input_method if db.respond_to?(:identifier_input_method)
  @identifier_output_method = db.identifier_output_method if db.respond_to?(:identifier_output_method)
  @opts = opts || {}
  @row_proc = nil
end
Returns the first record matching the conditions. Examples:
ds[:id=>1] => {:id=>1}
# File lib/sequel/dataset/convenience.rb, line 13
def [](*conditions)
  raise(Error, ARRAY_ACCESS_ERROR_MSG) if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0
  first(*conditions)
end
Adds the given graph aliases to the list of graph aliases to use, unlike set_graph_aliases, which replaces the list. See set_graph_aliases.
# File lib/sequel/dataset/graph.rb, line 168 168: def add_graph_aliases(graph_aliases) 169: ds = select_more(*graph_alias_columns(graph_aliases)) 170: ds.opts[:graph_aliases] = (ds.opts[:graph_aliases] || {}).merge(graph_aliases) 171: ds 172: end
Adds a further filter to an existing filter using AND. If no filter exists, an error is raised. This method is identical to filter except that it expects an existing filter.
ds.filter(:a).and(:b) # SQL: WHERE a AND b
# File lib/sequel/dataset/sql.rb, line 25
def and(*cond, &block)
  raise(InvalidOperation, "No existing filter found.") unless @opts[:having] || @opts[:where]
  filter(*cond, &block)
end
Returns the average value for the given column.
# File lib/sequel/dataset/convenience.rb, line 27 27: def avg(column) 28: get{|o| o.avg(column)} 29: end
For the given type (:select, :insert, :update, or :delete), run the SQL with the bind variables specified in the hash. values is a hash passed to insert or update (if one of those types is used) and may contain placeholders.
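A minimal sketch of the workflow, assuming an items table (the :$n placeholder name is arbitrary):

  ds = DB[:items].filter(:id => :$n)
  ds.call(:select, :n => 1)                    # runs the SELECT with id = 1
  ds.call(:update, {:n => 1}, :name => 'abc')  # runs the UPDATE with id = 1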
# File lib/sequel/dataset/prepared_statements.rb, line 181 181: def call(type, bind_variables={}, values=nil) 182: prepare(type, nil, values).call(bind_variables) 183: end
SQL fragment for specifying given CaseExpression.
# File lib/sequel/dataset/sql.rb, line 41 41: def case_expression_sql(ce) 42: sql = '(CASE ' 43: sql << "#{literal(ce.expression)} " if ce.expression 44: ce.conditions.collect{ |c,r| 45: sql << "WHEN #{literal(c)} THEN #{literal(r)} " 46: } 47: sql << "ELSE #{literal(ce.default)} END)" 48: end
Returns a new clone of the dataset with the given options merged. If the changed options include any in COLUMN_CHANGE_OPTS, the cached columns are deleted.
# File lib/sequel/dataset.rb, line 131
def clone(opts = {})
  c = super()
  c.opts = @opts.merge(opts)
  c.instance_variable_set(:@columns, nil) if opts.keys.any?{|o| COLUMN_CHANGE_OPTS.include?(o)}
  c
end
Returns the columns in the result set in order. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to get a single row. Adapters are expected to fill the columns cache with the column information when a query is performed. If the dataset does not have any rows, this may be an empty array depending on how the adapter is programmed.
If you are looking for all columns for a single table and maybe some information about each column (e.g. type), see Database#schema.
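For example (the table and column names are illustrative):

  DB[:items].columns # => [:id, :name]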
# File lib/sequel/dataset.rb, line 147
def columns
  return @columns if @columns
  ds = unfiltered.unordered.clone(:distinct => nil, :limit => 1)
  ds.each{break}
  @columns = ds.instance_variable_get(:@columns)
  @columns || []
end
SQL fragment for complex expressions
# File lib/sequel/dataset/sql.rb, line 61
def complex_expression_sql(op, args)
  case op
  when *IS_OPERATORS
    v = IS_LITERALS[args.at(1)] || raise(Error, 'Invalid argument used for IS operator')
    "(#{literal(args.at(0))} #{op} #{v})"
  when *TWO_ARITY_OPERATORS
    "(#{literal(args.at(0))} #{op} #{literal(args.at(1))})"
  when *N_ARITY_OPERATORS
    "(#{args.collect{|a| literal(a)}.join(" #{op} ")})"
  when :NOT
    "NOT #{literal(args.at(0))}"
  when :NOOP
    literal(args.at(0))
  when :'B~'
    "~#{literal(args.at(0))}"
  else
    raise(Sequel::Error, "invalid operator #{op}")
  end
end
Returns the number of records in the dataset.
# File lib/sequel/dataset/sql.rb, line 82
def count
  options_overlap(COUNT_FROM_SELF_OPTS) ? from_self.count : clone(STOCK_COUNT_OPTS).single_value.to_i
end
Add a mutation method to this dataset instance.
# File lib/sequel/dataset.rb, line 163 163: def def_mutation_method(*meths) 164: meths.each do |meth| 165: instance_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end") 166: end 167: end
Deletes the records in the dataset. The returned value is generally the number of records deleted, but that is adapter dependent.
# File lib/sequel/dataset.rb, line 171 171: def delete 172: execute_dui(delete_sql) 173: end
Formats a DELETE statement using the given options and dataset options.
dataset.filter{|o| o.price >= 100}.delete_sql #=> "DELETE FROM items WHERE (price >= 100)"
# File lib/sequel/dataset/sql.rb, line 90 90: def delete_sql 91: opts = @opts 92: 93: return static_sql(opts[:sql]) if opts[:sql] 94: 95: if opts[:group] 96: raise InvalidOperation, "Grouped datasets cannot be deleted from" 97: elsif opts[:from].is_a?(Array) && opts[:from].size > 1 98: raise InvalidOperation, "Joined datasets cannot be deleted from" 99: end 100: 101: sql = "DELETE FROM #{source_list(opts[:from])}" 102: 103: if where = opts[:where] 104: sql << " WHERE #{literal(where)}" 105: end 106: 107: sql 108: end
Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns.
dataset.distinct # SQL: SELECT DISTINCT * FROM items
dataset.order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
# File lib/sequel/dataset/sql.rb, line 118 118: def distinct(*args) 119: clone(:distinct => args) 120: end
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
# File lib/sequel/dataset.rb, line 177
def each(&block)
  if @opts[:graph]
    graph_each(&block)
  else
    if row_proc = @row_proc
      fetch_rows(select_sql){|r| yield row_proc.call(r)}
    else
      fetch_rows(select_sql, &block)
    end
  end
  self
end
Yields a paginated dataset for each page and returns the receiver. Does a count to find the total number of records for this dataset.
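Illustrative usage (the items table and the per-row processing are hypothetical):

  DB[:items].each_page(50) do |page|
    # page is a dataset extended with Pagination, limited to 50 rows
    puts "page #{page.current_page} of #{page.page_count}"
    page.each{|row| handle(row)}
  end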
# File lib/sequel/extensions/pagination.rb, line 16
def each_page(page_size, &block)
  raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit]
  record_count = count
  total_pages = (record_count / page_size.to_f).ceil
  (1..total_pages).each{|page_no| yield paginate(page_no, page_size, record_count)}
  self
end
Returns true if no records exist in the dataset, false otherwise
# File lib/sequel/dataset/convenience.rb, line 32 32: def empty? 33: get(1).nil? 34: end
Adds an EXCEPT clause using a second dataset object. If all is true the clause used is EXCEPT ALL, which may return duplicate rows.
DB[:items].except(DB[:other_items]).sql #=> "SELECT * FROM items EXCEPT SELECT * FROM other_items"
# File lib/sequel/dataset/sql.rb, line 127 127: def except(dataset, all = false) 128: compound_clone(:except, dataset, all) 129: end
Performs the inverse of Dataset#filter.
dataset.exclude(:category => 'software').sql #=> "SELECT * FROM items WHERE (category != 'software')"
# File lib/sequel/dataset/sql.rb, line 135
def exclude(*cond, &block)
  clause = (@opts[:having] ? :having : :where)
  cond = cond.first if cond.size == 1
  cond = filter_expr(cond, &block)
  cond = SQL::BooleanExpression.invert(cond)
  cond = SQL::BooleanExpression.new(:AND, @opts[clause], cond) if @opts[clause]
  clone(clause => cond)
end
Returns an EXISTS clause for the dataset as a LiteralString.
DB.select(1).where(DB[:items].exists).sql #=> "SELECT 1 WHERE EXISTS (SELECT * FROM items)"
# File lib/sequel/dataset/sql.rb, line 148 148: def exists 149: LiteralString.new("EXISTS (#{select_sql})") 150: end
Execute the SQL on the database and yield the rows as hashes with symbol keys.
# File lib/sequel/adapters/do.rb, line 180 180: def fetch_rows(sql) 181: execute(sql) do |reader| 182: cols = @columns = reader.fields.map{|f| output_identifier(f)} 183: while(reader.next!) do 184: h = {} 185: cols.zip(reader.values).each{|k, v| h[k] = v} 186: yield h 187: end 188: end 189: self 190: end
Returns a copy of the dataset with the given conditions imposed upon it. If the query already has a HAVING clause, then the conditions are imposed in the HAVING clause. If not, then they are imposed in the WHERE clause.
filter accepts the following argument types:
filter also takes a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions.
If both a block and regular argument are provided, they get ANDed together.
Examples:
dataset.filter(:id => 3).sql #=> "SELECT * FROM items WHERE (id = 3)"
dataset.filter('price < ?', 100).sql #=> "SELECT * FROM items WHERE price < 100"
dataset.filter([[:id, [1,2,3]], [:id, 0..10]]).sql #=> "SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))"
dataset.filter('price < 100').sql #=> "SELECT * FROM items WHERE price < 100"
dataset.filter(:active).sql #=> "SELECT * FROM items WHERE active"
dataset.filter{|o| o.price < 100}.sql #=> "SELECT * FROM items WHERE (price < 100)"
Multiple filter calls can be chained for scoping:
software = dataset.filter(:category => 'software')
software.filter{|o| o.price < 100}.sql #=> "SELECT * FROM items WHERE ((category = 'software') AND (price < 100))"
See doc/dataset_filters.rdoc for more examples and details.
# File lib/sequel/dataset/sql.rb, line 199
def filter(*cond, &block)
  _filter(@opts[:having] ? :having : :where, *cond, &block)
end
If an integer argument is given, it is interpreted as a limit, and all matching records up to that limit are returned. If no argument is passed, the first matching record is returned. If any other type of argument(s) is passed, it is given to filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything. Examples:
ds.first => {:id=>7}
ds.first(2) => [{:id=>6}, {:id=>4}]
ds.order(:id).first(2) => [{:id=>1}, {:id=>2}]
ds.first(:id=>2) => {:id=>2}
ds.first("id = 3") => {:id=>3}
ds.first("id = ?", 4) => {:id=>4}
ds.first{|o| o.id > 2} => {:id=>5}
ds.order(:id).first{|o| o.id > 2} => {:id=>3}
ds.first{|o| o.id > 2} => {:id=>5}
ds.first("id > ?", 4){|o| o.id < 6} => {:id=>5}
ds.order(:id).first(2){|o| o.id < 2} => [{:id=>1}]
# File lib/sequel/dataset/convenience.rb, line 55 55: def first(*args, &block) 56: ds = block ? filter(&block) : self 57: 58: if args.empty? 59: ds.single_record 60: else 61: args = (args.size == 1) ? args.first : args 62: if Integer === args 63: ds.limit(args).all 64: else 65: ds.filter(args).single_record 66: end 67: end 68: end
The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an error. If the table is aliased, returns the aliased name.
# File lib/sequel/dataset/sql.rb, line 205 205: def first_source 206: source = @opts[:from] 207: if source.nil? || source.empty? 208: raise Error, 'No source specified for query' 209: end 210: case s = source.first 211: when Hash 212: s.values.first 213: when Symbol 214: sch, table, aliaz = split_symbol(s) 215: aliaz ? aliaz.to_sym : s 216: else 217: s 218: end 219: end
Returns a copy of the dataset with the source changed.
dataset.from # SQL: SELECT *
dataset.from(:blah) # SQL: SELECT * FROM blah
dataset.from(:blah, :foo) # SQL: SELECT * FROM blah, foo
# File lib/sequel/dataset/sql.rb, line 226 226: def from(*source) 227: clone(:from=>source.empty? ? nil : source) 228: end
Returns a dataset selecting from the current dataset.
ds = DB[:items].order(:name)
ds.sql #=> "SELECT * FROM items ORDER BY name"
ds.from_self.sql #=> "SELECT * FROM (SELECT * FROM items ORDER BY name)"
# File lib/sequel/dataset/sql.rb, line 235
def from_self
  fs = {}
  @opts.keys.each{|k| fs[k] = nil}
  fs[:from] = [self]
  clone(fs)
end
Return the column value for the first matching record in the dataset. Raises an error if both an argument and a block are given.
ds.get(:id)
ds.get{|o| o.sum(:id)}
# File lib/sequel/dataset/convenience.rb, line 75
def get(column=nil, &block)
  if column
    raise(Error, GET_ERROR_MSG) if block
    select(column).single_value
  else
    select(&block).single_value
  end
end
Allows you to join multiple datasets/tables and have the result set split into component tables.
This differs from the usual usage of join, which returns the result set as a single hash. For example:
# CREATE TABLE artists (id INTEGER, name TEXT);
# CREATE TABLE albums (id INTEGER, name TEXT, artist_id INTEGER);
DB[:artists].left_outer_join(:albums, :artist_id=>:id).first
=> {:id=>albums.id, :name=>albums.name, :artist_id=>albums.artist_id}
DB[:artists].graph(:albums, :artist_id=>:id).first
=> {:artists=>{:id=>artists.id, :name=>artists.name}, :albums=>{:id=>albums.id, :name=>albums.name, :artist_id=>albums.artist_id}}
Using a join such as left_outer_join, the attribute names that are shared between the tables are combined in the single return hash. You can get around that by using .select with correct aliases for all of the columns, but it is simpler to use graph and have the result set split for you. In addition, graph respects any row_proc of the current dataset and the datasets you use with graph.
If you are graphing a table and all columns for that table are nil, this indicates that no matching rows existed in the table, so graph will return nil instead of a hash with all nil values:
# If the artist doesn't have any albums
DB[:artists].graph(:albums, :artist_id=>:id).first
=> {:artists=>{:id=>artists.id, :name=>artists.name}, :albums=>nil}
Arguments:
# File lib/sequel/dataset/graph.rb, line 47
def graph(dataset, join_conditions = nil, options = {}, &block)
  # Allow the use of a model, dataset, or symbol as the first argument
  # Find the table name/dataset based on the argument
  dataset = dataset.dataset if dataset.respond_to?(:dataset)
  case dataset
  when Symbol
    table = dataset
    dataset = @db[dataset]
  when ::Sequel::Dataset
    table = dataset.first_source
  else
    raise Error, "The dataset argument should be a symbol, dataset, or model"
  end

  # Raise Sequel::Error with explanation that the table alias has been used
  raise_alias_error = lambda do
    raise(Error, "this #{options[:table_alias] ? 'alias' : 'table'} has already been been used, please specify " \
      "#{options[:table_alias] ? 'a different alias' : 'an alias via the :table_alias option'}")
  end

  # Only allow table aliases that haven't been used
  table_alias = options[:table_alias] || table
  raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias)

  # Join the table early in order to avoid cloning the dataset twice
  ds = join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias, :implicit_qualifier=>options[:implicit_qualifier], &block)
  opts = ds.opts

  # Whether to include the table in the result set
  add_table = options[:select] == false ? false : true
  # Whether to add the columns to the list of column aliases
  add_columns = !ds.opts.include?(:graph_aliases)

  # Setup the initial graph data structure if it doesn't exist
  unless graph = opts[:graph]
    master = ds.first_source
    raise_alias_error.call if master == table_alias
    # Master hash storing all .graph related information
    graph = opts[:graph] = {}
    # Associates column aliases back to tables and columns
    column_aliases = graph[:column_aliases] = {}
    # Associates table alias (the master is never aliased)
    table_aliases = graph[:table_aliases] = {master=>self}
    # Keep track of the alias numbers used
    ca_num = graph[:column_alias_num] = Hash.new(0)
    # All columns in the master table are never
    # aliased, but are not included if set_graph_aliases
    # has been used.
    if add_columns
      select = opts[:select] = []
      columns.each do |column|
        column_aliases[column] = [master, column]
        select.push(SQL::QualifiedIdentifier.new(master, column))
      end
    end
  end

  # Add the table alias to the list of aliases
  # Even if it isn't been used in the result set,
  # we add a key for it with a nil value so we can check if it
  # is used more than once
  table_aliases = graph[:table_aliases]
  table_aliases[table_alias] = add_table ? dataset : nil

  # Add the columns to the selection unless we are ignoring them
  if add_table && add_columns
    select = opts[:select]
    column_aliases = graph[:column_aliases]
    ca_num = graph[:column_alias_num]
    # Which columns to add to the result set
    cols = options[:select] || dataset.columns
    # If the column hasn't been used yet, don't alias it.
    # If it has been used, try table_column.
    # If that has been used, try table_column_N
    # using the next value of N that we know hasn't been
    # used
    cols.each do |column|
      col_alias, identifier = if column_aliases[column]
        column_alias = "#{table_alias}_#{column}"
        if column_aliases[column_alias]
          column_alias_num = ca_num[column_alias]
          column_alias = "#{column_alias}_#{column_alias_num}"
          ca_num[column_alias] += 1
        end
        [column_alias, SQL::QualifiedIdentifier.new(table_alias, column).as(column_alias)]
      else
        [column, SQL::QualifiedIdentifier.new(table_alias, column)]
      end
      column_aliases[col_alias] = [table_alias, column]
      select.push(identifier)
    end
  end
  ds
end
Pattern match any of the columns to any of the terms. The terms can be strings (which use LIKE) or regular expressions (which are only supported in some databases). See Sequel::SQL::StringExpression.like. Note that the total number of pattern matches will be cols.length * terms.length, which could cause performance issues.
dataset.grep(:a, '%test%') # SQL: SELECT * FROM items WHERE a LIKE '%test%'
dataset.grep([:a, :b], %w'%test% foo') # SQL: SELECT * FROM items WHERE a LIKE '%test%' OR a LIKE 'foo' OR b LIKE '%test%' OR b LIKE 'foo'
# File lib/sequel/dataset/sql.rb, line 256 256: def grep(cols, terms) 257: filter(SQL::BooleanExpression.new(:OR, *Array(cols).collect{|c| SQL::StringExpression.like(c, *terms)})) 258: end
Returns a copy of the dataset with the results grouped by the value of the given columns.
dataset.group(:id) # SELECT * FROM items GROUP BY id
dataset.group(:id, :name) # SELECT * FROM items GROUP BY id, name
# File lib/sequel/dataset/sql.rb, line 265 265: def group(*columns) 266: clone(:group => columns) 267: end
Returns a dataset grouped by the given columns, with a count per group, ordered by the count of records. Examples:
ds.group_and_count(:name) => [{:name=>'a', :count=>1}, ...]
ds.group_and_count(:first_name, :last_name) => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]
# File lib/sequel/dataset/convenience.rb, line 89 89: def group_and_count(*columns) 90: group(*columns).select(*(columns + [COUNT_OF_ALL_AS_COUNT])).order(:count) 91: end
Returns a copy of the dataset with the HAVING conditions changed. Raises an error if the dataset has not been grouped. See filter for argument types.
dataset.group(:sum).having(:sum=>10) # SQL: SELECT * FROM items GROUP BY sum HAVING sum = 10
# File lib/sequel/dataset/sql.rb, line 274 274: def having(*cond, &block) 275: raise(InvalidOperation, "Can only specify a HAVING clause on a grouped dataset") unless @opts[:group] 276: _filter(:having, *cond, &block) 277: end
Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table. Inserts are automatically wrapped in a transaction.
This method is called with a columns array and an array of value arrays:
dataset.import([:x, :y], [[1, 2], [3, 4]])
This method also accepts a dataset instead of an array of value arrays:
dataset.import([:x, :y], other_dataset.select(:a___x, :b___y))
The method also accepts a :slice or :commit_every option that specifies the number of records to insert per transaction. This is useful especially when inserting a large number of records, e.g.:
# this will commit every 50 records
dataset.import([:x, :y], [[1, 2], [3, 4], ...], :slice => 50)
# File lib/sequel/dataset/convenience.rb, line 111 111: def import(columns, values, opts={}) 112: return @db.transaction{execute_dui("INSERT INTO #{quote_schema_table(@opts[:from].first)} (#{identifier_list(columns)}) VALUES #{literal(values)}")} if values.is_a?(Dataset) 113: 114: return if values.empty? 115: raise(Error, IMPORT_ERROR_MSG) if columns.empty? 116: 117: if slice_size = opts[:commit_every] || opts[:slice] 118: offset = 0 119: loop do 120: @db.transaction(opts){multi_insert_sql(columns, values[offset, slice_size]).each{|st| execute_dui(st)}} 121: offset += slice_size 122: break if offset >= values.length 123: end 124: else 125: statements = multi_insert_sql(columns, values) 126: @db.transaction{statements.each{|st| execute_dui(st)}} 127: end 128: end
Inserts values into the associated table. The returned value is generally the value of the primary key for the inserted row, but that is adapter dependent.
# File lib/sequel/dataset.rb, line 198 198: def insert(*values) 199: execute_insert(insert_sql(*values)) 200: end
Inserts multiple values. If a block is given it is invoked for each item in the given array before inserting it. See multi_insert as a possible faster version that inserts multiple records in one SQL statement.
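For example, assuming an items table with a name column:

  dataset.insert_multiple([{:name => 'abc'}, {:name => 'def'}])
  dataset.insert_multiple(['abc', 'def']){|n| {:name => n.upcase}}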
# File lib/sequel/dataset/sql.rb, line 283
def insert_multiple(array, &block)
  if block
    array.each {|i| insert(block[i])}
  else
    array.each {|i| insert(i)}
  end
end
Formats an INSERT statement using the given values. If a hash is given, the resulting statement includes column names. If no values are given, the resulting statement includes a DEFAULT VALUES clause.
dataset.insert_sql #=> 'INSERT INTO items DEFAULT VALUES'
dataset.insert_sql(1,2,3) #=> 'INSERT INTO items VALUES (1, 2, 3)'
dataset.insert_sql(:a => 1, :b => 2) #=> 'INSERT INTO items (a, b) VALUES (1, 2)'
# File lib/sequel/dataset/sql.rb, line 299 299: def insert_sql(*values) 300: return static_sql(@opts[:sql]) if @opts[:sql] 301: 302: from = source_list(@opts[:from]) 303: case values.size 304: when 0 305: values = {} 306: when 1 307: vals = values.at(0) 308: if [Hash, Dataset, Array].any?{|c| vals.is_a?(c)} 309: values = vals 310: elsif vals.respond_to?(:values) 311: values = vals.values 312: end 313: end 314: 315: case values 316: when Array 317: if values.empty? 318: insert_default_values_sql 319: else 320: "INSERT INTO #{from} VALUES #{literal(values)}" 321: end 322: when Hash 323: values = @opts[:defaults].merge(values) if @opts[:defaults] 324: values = values.merge(@opts[:overrides]) if @opts[:overrides] 325: if values.empty? 326: insert_default_values_sql 327: else 328: fl, vl = [], [] 329: values.each do |k, v| 330: fl << literal(String === k ? k.to_sym : k) 331: vl << literal(v) 332: end 333: "INSERT INTO #{from} (#{fl.join(COMMA_SEPARATOR)}) VALUES (#{vl.join(COMMA_SEPARATOR)})" 334: end 335: when Dataset 336: "INSERT INTO #{from} #{literal(values)}" 337: end 338: end
Adds an INTERSECT clause using a second dataset object. If all is true the clause used is INTERSECT ALL, which may return duplicate rows.
DB[:items].intersect(DB[:other_items]).sql #=> "SELECT * FROM items INTERSECT SELECT * FROM other_items"
# File lib/sequel/dataset/sql.rb, line 345 345: def intersect(dataset, all = false) 346: compound_clone(:intersect, dataset, all) 347: end
Inverts the current filter
dataset.filter(:category => 'software').invert.sql #=> "SELECT * FROM items WHERE (category != 'software')"
# File lib/sequel/dataset/sql.rb, line 353
def invert
  having, where = @opts[:having], @opts[:where]
  raise(Error, "No current filter") unless having || where
  o = {}
  o[:having] = SQL::BooleanExpression.invert(having) if having
  o[:where] = SQL::BooleanExpression.invert(where) if where
  clone(o)
end
SQL fragment specifying a JOIN clause without ON or USING.
# File lib/sequel/dataset/sql.rb, line 363 363: def join_clause_sql(jc) 364: table = jc.table 365: table_alias = jc.table_alias 366: table_alias = nil if table == table_alias 367: tref = table_ref(table) 368: " #{join_type_sql(jc.join_type)} #{table_alias ? as_sql(tref, table_alias) : tref}" 369: end
Returns a joined dataset. Uses the following arguments:
# File lib/sequel/dataset/sql.rb, line 411 411: def join_table(type, table, expr=nil, options={}, &block) 412: if [Symbol, String].any?{|c| options.is_a?(c)} 413: table_alias = options 414: last_alias = nil 415: else 416: table_alias = options[:table_alias] 417: last_alias = options[:implicit_qualifier] 418: end 419: if Dataset === table 420: if table_alias.nil? 421: table_alias_num = (@opts[:num_dataset_sources] || 0) + 1 422: table_alias = "t#{table_alias_num}" 423: end 424: table_name = table_alias 425: else 426: table = table.table_name if table.respond_to?(:table_name) 427: table_name = table_alias || table 428: end 429: 430: join = if expr.nil? and !block_given? 431: SQL::JoinClause.new(type, table, table_alias) 432: elsif Array === expr and !expr.empty? and expr.all?{|x| Symbol === x} 433: raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block_given? 434: SQL::JoinUsingClause.new(expr, type, table, table_alias) 435: else 436: last_alias ||= @opts[:last_joined_table] || (first_source.is_a?(Dataset) ? 't1' : first_source) 437: if Sequel.condition_specifier?(expr) 438: expr = expr.collect do |k, v| 439: k = qualified_column_name(k, table_name) if k.is_a?(Symbol) 440: v = qualified_column_name(v, last_alias) if v.is_a?(Symbol) 441: [k,v] 442: end 443: end 444: if block_given? 445: expr2 = yield(table_name, last_alias, @opts[:join] || []) 446: expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2 447: end 448: SQL::JoinOnClause.new(expr, type, table, table_alias) 449: end 450: 451: opts = {:join => (@opts[:join] || []) + [join], :last_joined_table => table_name} 452: opts[:num_dataset_sources] = table_alias_num if table_alias_num 453: clone(opts) 454: end
Reverses the order and then runs first. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.
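For example (the returned rows are illustrative):

  ds.order(:id).last         # => {:id=>9}
  ds.order(:id.desc).last(2) # => [{:id=>1}, {:id=>2}]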
# File lib/sequel/dataset/convenience.rb, line 140
def last(*args, &block)
  raise(Error, 'No order specified') unless @opts[:order]
  reverse.first(*args, &block)
end
If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset.
dataset.limit(10) # SQL: SELECT * FROM items LIMIT 10
dataset.limit(10, 20) # SQL: SELECT * FROM items LIMIT 10 OFFSET 20
# File lib/sequel/dataset/sql.rb, line 462 462: def limit(l, o = nil) 463: return from_self.limit(l, o) if @opts[:sql] 464: 465: if Range === l 466: o = l.first 467: l = l.last - l.first + (l.exclude_end? ? 0 : 1) 468: end 469: l = l.to_i 470: raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1 471: opts = {:limit => l} 472: if o 473: o = o.to_i 474: raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0 475: opts[:offset] = o 476: end 477: clone(opts) 478: end
Returns a literal representation of a value to be used as part of an SQL expression.
dataset.literal("abc'def\\") #=> "'abc''def\\\\'" dataset.literal(:items__id) #=> "items.id" dataset.literal([1, 2, 3]) => "(1, 2, 3)" dataset.literal(DB[:items]) => "(SELECT * FROM items)" dataset.literal(:x + 1 > :y) => "((x + 1) > y)"
If an unsupported object is given, an exception is raised.
# File lib/sequel/dataset/sql.rb, line 490 490: def literal(v) 491: case v 492: when String 493: return v if v.is_a?(LiteralString) 494: v.is_a?(SQL::Blob) ? literal_blob(v) : literal_string(v) 495: when Symbol 496: literal_symbol(v) 497: when Integer 498: literal_integer(v) 499: when Hash 500: literal_hash(v) 501: when SQL::Expression 502: literal_expression(v) 503: when Float 504: literal_float(v) 505: when BigDecimal 506: literal_big_decimal(v) 507: when NilClass 508: NULL 509: when TrueClass 510: literal_true 511: when FalseClass 512: literal_false 513: when Array 514: literal_array(v) 515: when Time 516: literal_time(v) 517: when DateTime 518: literal_datetime(v) 519: when Date 520: literal_date(v) 521: when Dataset 522: literal_dataset(v) 523: else 524: literal_other(v) 525: end 526: end
Maps column values for each record in the dataset (if a column name is given), or performs the stock mapping functionality of Enumerable. Raises an error if both an argument and block are given. Examples:
ds.map(:id) => [1, 2, 3, ...]
ds.map{|r| r[:id] * 2} => [2, 4, 6, ...]
# File lib/sequel/dataset/convenience.rb, line 151 151: def map(column=nil, &block) 152: if column 153: raise(Error, MAP_ERROR_MSG) if block 154: super(){|r| r[column]} 155: else 156: super(&block) 157: end 158: end
Returns the maximum value for the given column.
# File lib/sequel/dataset/convenience.rb, line 161 161: def max(column) 162: get{|o| o.max(column)} 163: end
Returns the minimum value for the given column.
# File lib/sequel/dataset/convenience.rb, line 166 166: def min(column) 167: get{|o| o.min(column)} 168: end
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:
dataset.multi_insert([{:x => 1}, {:x => 2}])
Be aware that all hashes should have the same keys if you use this calling method, otherwise some columns could be missed or set to null instead of to default values.
You can also use the :slice or :commit_every option that import accepts.
# File lib/sequel/dataset/convenience.rb, line 180 180: def multi_insert(hashes, opts={}) 181: return if hashes.empty? 182: columns = hashes.first.keys 183: import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts) 184: end
Returns an array of insert statements for inserting multiple records. This method is used by multi_insert to format insert statements and expects a keys array and an array of value arrays.
This method should be overridden by descendants if they support inserting multiple records in a single SQL statement.
# File lib/sequel/dataset/sql.rb, line 534 534: def multi_insert_sql(columns, values) 535: s = "INSERT INTO #{source_list(@opts[:from])} (#{identifier_list(columns)}) VALUES " 536: values.map{|r| s + literal(r)} 537: end
Adds an alternate filter to an existing filter using OR. If no filter exists an error is raised.
dataset.filter(:a).or(:b) # SQL: SELECT * FROM items WHERE a OR b
# File lib/sequel/dataset/sql.rb, line 543
def or(*cond, &block)
  clause = (@opts[:having] ? :having : :where)
  raise(InvalidOperation, "No existing filter found.") unless @opts[clause]
  cond = cond.first if cond.size == 1
  clone(clause => SQL::BooleanExpression.new(:OR, @opts[clause], filter_expr(cond, &block)))
end
Returns a copy of the dataset with the order changed. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, and even SQL functions. If a block is given, it is treated as a virtual row block, similar to filter.
ds.order(:name).sql #=> 'SELECT * FROM items ORDER BY name'
ds.order(:a, :b).sql #=> 'SELECT * FROM items ORDER BY a, b'
ds.order('a + b'.lit).sql #=> 'SELECT * FROM items ORDER BY a + b'
ds.order(:a + :b).sql #=> 'SELECT * FROM items ORDER BY (a + b)'
ds.order(:name.desc).sql #=> 'SELECT * FROM items ORDER BY name DESC'
ds.order(:name.asc).sql #=> 'SELECT * FROM items ORDER BY name ASC'
ds.order{|o| o.sum(:name)}.sql #=> 'SELECT * FROM items ORDER BY sum(name)'
ds.order(nil).sql #=> 'SELECT * FROM items'
# File lib/sequel/dataset/sql.rb, line 563 563: def order(*columns, &block) 564: columns += Array(virtual_row_block_call(block)) if block 565: clone(:order => (columns.compact.empty?) ? nil : columns) 566: end
Returns a copy of the dataset with the order columns added to the existing order.
ds.order(:a).order(:b).sql #=> 'SELECT * FROM items ORDER BY b'
ds.order(:a).order_more(:b).sql #=> 'SELECT * FROM items ORDER BY a, b'
# File lib/sequel/dataset/sql.rb, line 574 574: def order_more(*columns, &block) 575: order(*Array(@opts[:order]).concat(columns), &block) 576: end
Returns a paginated dataset. The returned dataset is limited to the page size at the correct offset, and extended with the Pagination module. If a record count is not provided, does a count of total number of records for this dataset.
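Illustrative usage:

  page = DB[:items].paginate(2, 25)  # second page, 25 records per page
  page.sql                           # => "SELECT * FROM items LIMIT 25 OFFSET 25"
  page.page_count                    # total pages (uses a COUNT query unless record_count was given)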
# File lib/sequel/extensions/pagination.rb, line 7
def paginate(page_no, page_size, record_count=nil)
  raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit]
  paginated = limit(page_size, (page_no - 1) * page_size)
  paginated.extend(Pagination)
  paginated.set_pagination_info(page_no, page_size, record_count || count)
end
Prepare an SQL statement for later execution. This returns a clone of the dataset extended with PreparedStatementMethods, on which you can call #call with the hash of bind variables to do the substitution. The prepared statement is also stored in the associated database. The following usage is identical:
ps = prepare(:select, :select_by_name)
ps.call(:name=>'Blah')
db.call(:select_by_name, :name=>'Blah')
# File lib/sequel/dataset/prepared_statements.rb, line 194 194: def prepare(type, name=nil, values=nil) 195: ps = to_prepared_statement(type, values) 196: db.prepared_statements[name] = ps if name 197: ps 198: end
Create a named prepared statement that is stored in the database (and connection) for reuse.
# File lib/sequel/adapters/jdbc.rb, line 437 437: def prepare(type, name=nil, values=nil) 438: ps = to_prepared_statement(type, values) 439: ps.extend(PreparedStatementMethods) 440: if name 441: ps.prepared_statement_name = name 442: db.prepared_statements[name] = ps 443: end 444: ps 445: end
SQL fragment for the qualified identifier, specifying a table and a column (or schema and table).
# File lib/sequel/dataset/sql.rb, line 594 594: def qualified_identifier_sql(qcr) 595: [qcr.table, qcr.column].map{|x| [SQL::QualifiedIdentifier, SQL::Identifier, Symbol].any?{|c| x.is_a?(c)} ? literal(x) : quote_identifier(x)}.join('.') 596: end
Translates a query block into a dataset. Query blocks can be useful when expressing complex SELECT statements, e.g.:
dataset = DB[:items].query do
  select :x, :y, :z
  filter{|o| (o.x > 1) & (o.y > 2)}
  order :z.desc
end
Which is the same as:
dataset = DB[:items].select(:x, :y, :z).filter{|o| (o.x > 1) & (o.y > 2)}.order(:z.desc)
Note that inside a call to query, you cannot call each, insert, update, or delete (or any method that calls those), or Sequel will raise an error.
# File lib/sequel/extensions/query.rb, line 26 26: def query(&block) 27: copy = clone({}) 28: copy.extend(QueryBlockCopy) 29: copy.instance_eval(&block) 30: clone(copy.opts) 31: end
Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, returns name as a string. If identifiers are being quoted, quotes the name with quoted_identifier.
# File lib/sequel/dataset/sql.rb, line 601 601: def quote_identifier(name) 602: return name if name.is_a?(LiteralString) 603: name = name.value if name.is_a?(SQL::Identifier) 604: name = input_identifier(name) 605: name = quoted_identifier(name) if quote_identifiers? 606: name 607: end
Whether this dataset quotes identifiers.
# File lib/sequel/dataset.rb, line 217 217: def quote_identifiers? 218: @quote_identifiers 219: end
Separates the schema from the table and returns a string with them quoted (if quoting identifiers)
# File lib/sequel/dataset/sql.rb, line 611 611: def quote_schema_table(table) 612: schema, table = schema_and_table(table) 613: "#{"#{quote_identifier(schema)}." if schema}#{quote_identifier(table)}" 614: end
This method quotes the given name with the SQL standard double quote. It should be overridden by subclasses to provide quoting that does not match the SQL standard, such as the backtick (used by MySQL and SQLite).
# File lib/sequel/dataset/sql.rb, line 619 619: def quoted_identifier(name) 620: "\"#{name.to_s.gsub('"', '""')}\"" 621: end
Split the schema information from the table
# File lib/sequel/dataset/sql.rb, line 631 631: def schema_and_table(table_name) 632: sch = db.default_schema if db 633: case table_name 634: when Symbol 635: s, t, a = split_symbol(table_name) 636: [s||sch, t] 637: when SQL::QualifiedIdentifier 638: [table_name.table, table_name.column] 639: when SQL::Identifier 640: [sch, table_name.value] 641: when String 642: [sch, table_name] 643: else 644: raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String' 645: end 646: end
Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to filter.
dataset.select(:a) # SELECT a FROM items
dataset.select(:a, :b) # SELECT a, b FROM items
dataset.select{|o| [o.a, o.sum(:b)]} # SELECT a, sum(b) FROM items
# File lib/sequel/dataset/sql.rb, line 655 655: def select(*columns, &block) 656: columns += Array(virtual_row_block_call(block)) if block 657: clone(:select => columns) 658: end
Returns a copy of the dataset selecting the wildcard.
dataset.select(:a).select_all # SELECT * FROM items
# File lib/sequel/dataset/sql.rb, line 663 663: def select_all 664: clone(:select => nil) 665: end
Returns a copy of the dataset with the given columns added to the existing selected columns.
dataset.select(:a).select(:b) # SELECT b FROM items
dataset.select(:a).select_more(:b) # SELECT a, b FROM items
# File lib/sequel/dataset/sql.rb, line 672 672: def select_more(*columns, &block) 673: select(*Array(@opts[:select]).concat(columns), &block) 674: end
Formats a SELECT statement
dataset.select_sql # => "SELECT * FROM items"
# File lib/sequel/dataset/sql.rb, line 679
def select_sql
  return static_sql(@opts[:sql]) if @opts[:sql]
  sql = 'SELECT'
  select_clause_order.each{|x| send("select_#{x}_sql", sql)}
  sql
end
Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default (SELECT queries use the :read_only server and all other queries use the :default server).
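A small sketch, assuming the database has been configured with sharding and a :read_only server:

  DB[:items].server(:read_only).all           # SELECT runs against the :read_only server
  DB[:items].server(:default).insert(:x => 1) # INSERT runs against the :default server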
# File lib/sequel/dataset.rb, line 224
def server(servr)
  clone(:server=>servr)
end
This allows you to manually specify the graph aliases to use when using graph. You can use it to only select certain columns, and have those columns mapped to specific aliases in the result set. This is the equivalent of .select for a graphed dataset, and must be used instead of .select whenever graphing is used. Example:
DB[:artists].graph(:albums, :artist_id=>:id).
  set_graph_aliases(:artist_name=>[:artists, :name], :album_name=>[:albums, :name], :forty_two=>[:albums, :fourtwo, 42]).
  first
=> {:artists=>{:name=>artists.name}, :albums=>{:name=>albums.name, :fourtwo=>42}}
Arguments:
# File lib/sequel/dataset/graph.rb, line 159 159: def set_graph_aliases(graph_aliases) 160: ds = select(*graph_alias_columns(graph_aliases)) 161: ds.opts[:graph_aliases] = graph_aliases 162: ds 163: end
Same as select_sql, not aliased directly to make subclassing simpler.
# File lib/sequel/dataset/sql.rb, line 687 687: def sql 688: select_sql 689: end
Returns true if the table exists. Will raise an error if the dataset has fixed SQL or selects from another dataset or more than one table.
# File lib/sequel/dataset/convenience.rb, line 216 216: def table_exists? 217: raise(Sequel::Error, "this dataset has fixed SQL") if @opts[:sql] 218: raise(Sequel::Error, "this dataset selects from multiple sources") if @opts[:from].size != 1 219: t = @opts[:from].first 220: raise(Sequel::Error, "this dataset selects from a sub query") if t.is_a?(Dataset) 221: @db.table_exists?(t) 222: end
Returns a string in CSV format containing the dataset records. By default the CSV representation includes the column titles in the first line. You can turn that off by passing false as the include_column_titles argument.
This does not use a CSV library or handle quoting of values in any way. If any values in any of the rows could include commas or line endings, you shouldn't use this.
# File lib/sequel/dataset/convenience.rb, line 232 232: def to_csv(include_column_titles = true) 233: n = naked 234: cols = n.columns 235: csv = '' 236: csv << "#{cols.join(COMMA_SEPARATOR)}\r\n" if include_column_titles 237: n.each{|r| csv << "#{cols.collect{|c| r[c]}.join(COMMA_SEPARATOR)}\r\n"} 238: csv 239: end
Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.
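For example (the rows shown are illustrative):

  DB[:items].to_hash(:id, :name) # => {1=>'abc', 2=>'def'}
  DB[:items].to_hash(:id)        # => {1=>{:id=>1, :name=>'abc'}, 2=>{:id=>2, :name=>'def'}}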
# File lib/sequel/dataset/convenience.rb, line 245
def to_hash(key_column, value_column = nil)
  inject({}) do |m, r|
    m[r[key_column]] = value_column ? r[value_column] : r
    m
  end
end
Adds a UNION clause using a second dataset object. If all is true the clause used is UNION ALL, which may return duplicate rows.
DB[:items].union(DB[:other_items]).sql #=> "SELECT * FROM items UNION SELECT * FROM other_items"
# File lib/sequel/dataset/sql.rb, line 708 708: def union(dataset, all = false) 709: compound_clone(:union, dataset, all) 710: end
Updates values for the dataset. The returned value is generally the number of rows updated, but that is adapter dependent.
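For example:

  DB[:items].filter(:id => 1).update(:price => 100) # => 1 (rows updated, if the adapter reports it)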
# File lib/sequel/dataset.rb, line 248
def update(values={})
  execute_dui(update_sql(values))
end
Formats an UPDATE statement using the given values.
dataset.update_sql(:price => 100, :category => 'software') #=> "UPDATE items SET price = 100, category = 'software'"
Raises an error if the dataset is grouped or includes more than one table.
# File lib/sequel/dataset/sql.rb, line 726 726: def update_sql(values = {}) 727: opts = @opts 728: 729: return static_sql(opts[:sql]) if opts[:sql] 730: 731: if opts[:group] 732: raise InvalidOperation, "A grouped dataset cannot be updated" 733: elsif (opts[:from].size > 1) or opts[:join] 734: raise InvalidOperation, "A joined dataset cannot be updated" 735: end 736: 737: sql = "UPDATE #{source_list(@opts[:from])} SET " 738: set = if values.is_a?(Hash) 739: values = opts[:defaults].merge(values) if opts[:defaults] 740: values = values.merge(opts[:overrides]) if opts[:overrides] 741: # get values from hash 742: values.map do |k, v| 743: "#{[String, Symbol].any?{|c| k.is_a?(c)} ? quote_identifier(k) : literal(k)} = #{literal(v)}" 744: end.join(COMMA_SEPARATOR) 745: else 746: # copy values verbatim 747: values 748: end 749: sql << set 750: if where = opts[:where] 751: sql << " WHERE #{literal(where)}" 752: end 753: 754: sql 755: end
Add a condition to the WHERE clause. See filter for argument types.
dataset.group(:a).having(:a).filter(:b) # SELECT * FROM items GROUP BY a HAVING a AND b
dataset.group(:a).having(:a).where(:b) # SELECT * FROM items WHERE b GROUP BY a HAVING a
# File lib/sequel/dataset/sql.rb, line 761 761: def where(*cond, &block) 762: _filter(:where, *cond, &block) 763: end
Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.
dataset.with_sql('SELECT * FROM foo') # SELECT * FROM foo
# File lib/sequel/dataset/sql.rb, line 769 769: def with_sql(sql, *args) 770: sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty? 771: clone(:sql=>sql) 772: end
Return true if the dataset has a non-nil value for any key in opts.
# File lib/sequel/dataset.rb, line 258 258: def options_overlap(opts) 259: !(@opts.collect{|k,v| k unless v.nil?}.compact & opts).empty? 260: end
Return a cloned copy of the current dataset extended with PreparedStatementMethods, setting the type and modify values.
# File lib/sequel/dataset/prepared_statements.rb, line 204 204: def to_prepared_statement(type, values=nil) 205: ps = clone 206: ps.extend(PreparedStatementMethods) 207: ps.prepared_type = type 208: ps.prepared_modify_values = values 209: ps 210: end